Computational Design

Artist Daito Manabe uses computational design to create unique and extraordinary art. We visit Manabe's studio to explore the secrets behind his bold vision. Discover how the latest digital technology resonates and interacts with the human body, pushing new boundaries in Manabe's artworks and designs. Uncover the design potential of computational technology.

Transcript

00:12

A solo dancer, moving on stage.

00:22

Beside them... A virtual dancer.

00:29

It was created by an AI that has studied human dance.

00:38

The evolution of computing has made it possible for artificial intelligence to design sounds and images: a new design frontier.

00:51

This is called computational design.

00:56

Today's guest is Daito Manabe, an artist who uses computational design to produce breathtaking images.

01:11

His works include this cutting-edge live performance by Japanese band Perfume.

01:19

He's worked with a range of artists to create visuals based on computational design.

01:30

Explore the potential of new designs that leverage AI.

01:37

Hello. Welcome to Design Talks Plus.

01:39

I'm Andrea Pompilio.

01:40

Hi, I'm Shaula.

01:42

So, Shaula. Today, we will take a peek into the future of design.

01:45

Yeah, that's right.

01:46

We're actually going to Daito Manabe's studio right around the corner.

01:50

I'm really looking forward to seeing what he is working on.

01:52

Great.

01:53

Let's go.

01:55

Andy and Shaula are visiting Manabe's personal studio.

02:00

Manabe-san.

02:02

Hello.

02:03

Hello, I'm Andy.

02:05

Hi, thank you for coming.

02:07

I'm Shaula. A pleasure to meet you.

02:10

I understand this is your personal studio, is that right?

02:15

Yes, I hole up in here to work on my projects.

02:19

It's quite big, isn't it?

02:21

Yes.

02:22

The high ceiling allows me to put speakers up there.

02:26

Oh yes!

02:28

A very versatile space.

02:32

And useful for filming too.

02:34

You have a lot of equipment. I can see a DJ booth.

02:39

I started DJ-ing at 15 or 16, so I've been doing it for around 30 years now.

02:46

That's a lot of records!

02:49

I have five or six thousand.

02:52

Five thousand records?

02:55

I feel like music is what got me into my current line of work.

03:00

Musical instruments too.

03:02

Yeah.

03:04

I often work with musicians.

03:06

There's a self-playing piano as well.

03:10

It plays itself?

03:11

Yes, it does.

03:13

I'm actually working on a project with Ryuichi Sakamoto right now.

03:18

The self-playing piano is going to be a part of that work.

03:22

An ongoing project.

03:24

That's right, yes.

03:27

Manabe shows Andy and Shaula a program that allows them to experience computational design.

03:37

Should we do something?

03:38

Move around?

03:40

You're fine as you are.

03:41

Just like this.

03:45

I'll move around a little.

03:47

Now let's see how it turns out.

03:50

That's good.

03:53

The AI analyzes the image and overlays a visual effect.

03:58

Computational design can make this happen in a snap.

04:11

Our theme for this episode is computational design.

04:15

Manabe-san, what does that term mean to you?

04:19

It's used very differently in different fields.

04:22

I personally work in the arts and entertainment.

04:26

It used to be that a person would have to draw a long series of sketches in order to create a short video sequence.

04:34

Now, software can automate the whole process.

04:37

You only need to draw a single curve to change the size of a ball across the whole sequence, for example.

04:43

The computer handles all of the calculations involved.

04:48

But the most extraordinary changes are coming in the field of machine learning.

04:53

This is one field in a broader spectrum that most people would know as artificial intelligence, or AI.

05:00

The last decade has seen an astonishing evolution.

05:08

Manabe's collaboration with a traditional Japanese artform won acclaim.

05:17

A kyogen performance by Nomura Mansai in 2017.

05:24

He danced Sanbaso, a historic piece performed during festive occasions such as the new year.

05:35

3D images unfolded behind him, in harmony with his movements.

05:49

It seemed as though the live performer and digital images were dancing together.

06:01

The visuals were the product of a computer analyzing Nomura's dance.

06:10

Four months before the performance, he donned a special suit with markers at each of his joints.

06:22

The technology is called motion capture.

06:29

His walking speed and stride length were all digitized.

06:38

They then used 48 cameras to capture Nomura's movements from every possible angle...

06:47

...and carried out a full volumetric capture.

06:54

Together with the motion capture data, the program was able to produce a digital dancer.

07:01

Manabe's computational design visuals begin by capturing and digitizing an analog source.

07:11

It took months of work to produce visuals that would enhance the unique atmosphere of kyogen performance.

07:22

AI has also been used for other performances.

07:32

Images of the audience taken before this Perfume concert were used to show them recreating the group's signature dances.

07:58

The band also uploaded the lyrics to all of their songs and had them analyzed.

08:02

The most common words were reconfigured into an astonishing new performance.

08:18

The kyogen performance was, I believe, in 2016 or '17.

08:24

That was one approach to computational design.

08:27

But nowadays you don't even need the motion-capture markers.

08:31

The AI software has evolved beyond that to recognize human poses without them.

08:38

In just five or six years?

08:40

Yes, it can simply read the movements and poses.

08:43

In just five years.

08:45

That seems incredibly fast for such an improvement.

08:49

It's incredible.

08:51

It's really picked up a lot of speed since 2017.

08:56

There are some things that can only be done with a computer.

09:00

The Perfume project took all of their lyrics and laid out the most common words in order of frequency.

09:07

I guess you could get some poor intern to do all that.

09:12

But it's just so much faster to get a computer to sift through it all and figure out that "you" is the top word, then "I" is the second, "light" the third, and so on.

09:22

I think that information leads to some fascinating new places.
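
As an aside for readers: the lyric analysis Manabe describes is, at its core, a word-frequency count. A minimal sketch in Python follows; the lyrics below are invented placeholders, not Perfume's actual corpus, so the resulting ranking is purely illustrative.

```python
from collections import Counter
import re

# Hypothetical stand-in lyrics; the real Perfume corpus is not reproduced here.
lyrics = """
you hold the light, I see you
you and I chase light, I see you
"""

# Tokenize into lowercase words and count occurrences.
words = re.findall(r"[a-z']+", lyrics.lower())
counts = Counter(words)

# Lay out the most common words in order of frequency,
# as the Perfume lyric project did at much larger scale.
for word, n in counts.most_common(3):
    print(word, n)
# → you 4
#   i 3
#   light 2
```

The real project did the same thing across an entire discography; the computer's advantage, as Manabe notes, is simply the speed of sifting.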

09:28

So when we hear 'computational design', is this what people are talking about?

09:35

A technology with unlimited potential?

09:37

I think so, yes.

09:40

But people use different terminology.

09:42

I hear 'media art' a lot as well.

09:45

Oh yes.

09:46

Or digital art.

09:47

The kinds of projects that you're involved in - do you think of your work as art?

09:53

Or as design?

09:57

I suppose my perspective is that when we come up with an idea ourselves, I think of it as art.

10:04

But when we're asked to come up with an idea to help something go viral online, for example, then that's design.

10:11

In that case, we've been given a problem and we're working towards a solution.

10:17

That, for me, is design.

10:19

Posing, and solving, problems.

10:21

It's about the solution.

10:23

Exactly, yes.

10:26

For Manabe, a design solves a problem.

10:30

He showcases another shining example.

10:34

A program that lets people follow the movement of a fencing blade in real time.

10:42

It was actually used in competition?

10:45

Yes, it was.

10:46

This was employed at an international competition.

10:49

Interesting.

10:51

It makes watching so much easier!

10:55

Fencing is fascinating but the blades move so fast, and bouts often end in the blink of an eye.

11:02

I was asked to find a way to make them more entertaining.

11:06

Intriguing idea.

11:08

So I thought, what if we could follow the tips of the blades with cameras, and then leverage computation to make all of those movements visible?

11:19

That could be fun.

11:20

It's such a great idea.

11:23

I made the proposal, and at first we had to put markers on the blades, like the kyogen motion capture.

11:31

Of course that makes it easy to capture, but it's not something you can do for an actual competition bout.

11:38

All this was way back in 2013.

11:42

But I made the prediction that by the 2020 Tokyo Olympics, AI technology would have improved enough to do this without needing to use markers.

11:53

And that prediction turned out to be correct.

11:56

You made the deadline?

11:58

Yes, this was used for the fencing at the Tokyo Olympics.

12:02

How difficult is this technology?

12:06

So at the beginning, you have to gather a huge amount of images and add the location of the tip of the blade by hand.

12:16

We'd film a practice bout, break it down into thousands of pictures, and have the part-timers at the company add the location of the blade tip.

12:25

A big team?

12:26

It took twenty people several months to annotate over 200,000 images.

12:31

That was the training data.

12:34

Exactly.

12:35

The AI was able to learn from that data.

12:38

Then, when we showed it new, unknown images, it was able to make a prediction on the approximate location of the blade tip.
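
The pipeline Manabe describes is classic supervised learning: hand-labelled examples in, a predictive model out. The toy sketch below illustrates the idea only; it stands in each "image" with a small feature vector and fits a linear least-squares map, whereas the actual fencing system trained a deep network on raw video frames.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 200 "images" reduced to 4 features each,
# with annotators having labelled the (x, y) position of the blade tip.
n_train = 200
features = rng.normal(size=(n_train, 4))
true_map = np.array([[10.0, 0.0],
                     [0.0, 12.0],
                     [3.0, -2.0],
                     [1.0, 1.0]])        # hidden feature-to-tip relation
tips = features @ true_map               # the hand-annotated tip coordinates

# "Training": find the mapping that best explains the annotations.
learned_map, *_ = np.linalg.lstsq(features, tips, rcond=None)

# "Inference": predict the blade tip on a new, unseen image.
new_image = rng.normal(size=(1, 4))
predicted_tip = new_image @ learned_map
print(predicted_tip)
```

Because this toy data is exactly linear, the learned map recovers the hidden one; real footage is far noisier, which is why the project needed 200,000 annotated frames.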

12:50

This one is more of an art project, I think.

12:55

This project asks questions: it's an artwork.

13:01

Data from the Tokyo Stock Exchange was used to show the sale of stocks in real-time through images and sound.

13:10

The result is a visual representation of movements across the entire market and intercompany connections, displayed in a three-dimensional space.

13:21

We made this at a time when the concept of artificial intelligence wasn't so mainstream.

13:28

But I knew it would make so many things easier.

13:32

I also knew it had the potential to cause problems.

13:36

So I wanted to make an artwork with AI at its heart.

13:42

This was in 2013, but one of the biggest uses for AI back then was actually to make automated stock transactions.

13:50

I see.

13:52

Now, you called this an artwork, not a design project.

13:56

I feel the border is quite fuzzy.

14:00

Here it seems like you're asking questions, but at the same time, you're presenting a solution.

14:08

You're taking the problem head on.

14:10

I feel like you have both of those elements at play in this piece.

14:15

I agree.

14:17

And of course there are some people who see this and feel that it's more design than art.

14:23

I'm sure.

14:25

As you can see, all of these moving graphs and data are making interactions on the stock exchange visible.

14:33

You can see it reflected on the share price of various brands.

14:37

It's visible.

14:39

Because of this, some people feel that this work falls more under the umbrella of design.

14:46

I think that's a valid perspective to take.

14:49

I'm happy for people to make up their own minds.

14:56

Daito Manabe drifts freely between art and design.

15:02

His personal history plays a major role in his creativity.

15:09

He was born in Tokyo in 1976.

15:14

It was a musical family - his father played bass and his mother keyboard.

15:22

He bought his first DJ turntable aged 16.

15:29

After polishing his skills, he moved to New York and began working with local artists.

15:38

At 24, he turned to the study of media art.

15:45

Manabe dug into computational design at a technology and art school in Gifu prefecture.

15:53

After graduating, he and some friends set up creative unit Rhizomatiks.

15:58

They've created countless artworks together.

16:04

Manabe first won major public attention with this work.

16:12

It's a program that turns music into electronic signals.

16:18

The signals travel to electrodes that tighten facial muscles, making the wearer's face twitch in time with the music.

16:28

Manabe's early immersion in music inspired him to find new ways to make music visible.

16:37

This inspiration was on full display during a live stream in 2020.

16:45

Computer graphics were created in real-time to the sounds of DJ KRUSH.

16:50

The result was a work people could enjoy with their eyes and ears.

17:09

This live performance marked a thousand days before the Tokyo Olympics.

17:14

The background is made up of real-time messages sent by the audience to their future selves.

17:29

Making sound visible, and interactive.

17:35

This work combines both of these elements.

17:40

An enormous installation exhibited at the National Museum of Modern Art in 2021.

17:47

Countless balls travel along rails, seemingly lighting up the room on their own.

17:54

The location of each ball is actually being tracked by cameras, and illuminated by lasers.

18:04

Multi-channel sound reacts to the location of the balls, creating a fantastical symphony of light and sound.

18:18

What is your relationship with sound?

18:21

My first love was regular music.

18:24

Songs of a fixed length - the kind of thing that would be sold on CDs, or played by DJs.

18:33

But after a while, I moved on to truly computational music.

18:39

I started to make sounds and music that were generated autonomously, following algorithmic or mathematical rules.

18:48

I actually studied math at university.

18:52

Towards the end of my course, I began to really dig into that aspect of sound production.

18:59

We've seen several works that turn music or sounds into images.

19:03

As a DJ, I'm sure you're familiar with the way that a live performance has its own unique energy that only the audience can sense.

19:15

But looking at your work, I feel like you're really able to translate that atmosphere into something visual for people who can't be there.

19:25

Maybe even give them a taste of how it feels to be in that atmosphere, together with the audience and performers.

19:31

I really feel that potential.

19:35

I do think there are certain kinds of music that become even better when accompanied by something visual.

19:42

Music alone can create an incredible sense of unity in a venue.

19:48

But when you're streaming an event online, for example, it's often better if there are visuals as well.

19:55

Right.

19:57

You know when you're in a venue, there's something happening everywhere you look.

20:01

But when you're just watching the stream, you might only get a shot of the DJ's hands.

20:08

We went into the event with KRUSH knowing it would be streamed.

20:12

So we specifically set out to create visuals that would enhance viewers' experience of the music, that would allow them to immerse themselves, even through the stream.

20:24

So, what is your objective with these projects?

20:27

What kind of role does the audience play?

20:32

Is audience interaction something that you always focus on?

20:37

I don't necessarily think art needs to be evaluated for it to be art.

20:42

But at the same time, no artwork is truly complete without an audience.

20:47

Until someone sees it and thinks about it, it remains unfinished.

20:53

So one question is, what will the audience think?

20:57

How can we change it to react to the movement of the audience?

21:01

Making an interactive work is fascinating, because the audience moves and that changes how the visuals play out.

21:09

When the visuals change, the audience reacts to that change, which starts a feedback loop.

21:14

So it's a question of, how do I make the most effective loop?

21:18

Interesting.

21:20

That's definitely part of my thinking.

21:24

You gravitate to some really interesting projects.

21:28

How much of this is because you let yourself pursue whatever interests you?

21:33

I think that's definitely the case for much of my work, yes.

21:37

I'm interested in my own body and its responses.

21:42

And more recently, I'm also interested in my brain.

21:47

At the end of the day, I think I'm making things that I personally want to see.

21:52

So yes, I would hope that the bulk of my work continues to be about things that interest me.

22:01

Manabe's creativity continues to evolve, guided by his own taste.

22:07

In October 2018, Manabe presented a new stage piece.

22:19

An AI learned dancers' movements and created a new choreography, which it performed alongside real dancers.

22:27

It's an extraordinary new approach.

22:34

Manabe looks at it as a way to expand physical human expression through computational design.

22:48

This is his most cutting-edge project.

22:54

The user thinks of a pose in their mind, and technology known as 'brain decoding' recreates that pose on a computer.

23:09

In other words, a person lying completely still could move a virtual body just by thinking.

23:20

Manabe used this to create a dance sequence in his imagination.

23:27

It's still at the simulation stage, and this is just a prototype.

23:31

But Manabe has high hopes it could lead to new forms of expression.

23:48

That's amazing.

23:49

It is - would you mind explaining the details to us?

23:52

Is the software part of a larger research project?

23:56

Well, to start you have to have an MRI.

24:00

During the scan you might be shown images of flowers or the sea.

24:05

Or maybe chairs.

24:07

You see images.

24:09

Yes, inside the MRI.

24:12

And it captures data on your brain's reactions to seeing flowers, or to seeing a chair.

24:18

The first step is to pair up the data from your brain with the image data.

24:25

The next stage is to visualize something in your mind, without revealing what it is.

24:32

The MRI captures data from your visual cortex and a technology called 'brain decoding' predicts that you're imagining flowers, for example.
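
For readers curious about the mechanics: brain decoding of this kind is a classification problem, pairing recorded responses with known stimuli and then labelling a new, unseen response. The sketch below is a deliberately tiny illustration with invented data (a nearest-centroid classifier on simulated 8-dimensional "responses"); real decoding operates on fMRI voxel data with far more sophisticated models.

```python
import numpy as np

rng = np.random.default_rng(1)

categories = ["flowers", "sea", "chair"]
# Hypothetical "signature" response for each image category.
prototypes = {c: rng.normal(size=8) for c in categories}

# Step 1: record labelled responses while images are shown in the MRI.
train = [(c, prototypes[c] + 0.1 * rng.normal(size=8))
         for c in categories for _ in range(20)]

# Learn one centroid per category from the labelled scans.
centroids = {c: np.mean([x for label, x in train if label == c], axis=0)
             for c in categories}

def decode(response):
    """Predict which category the subject is imagining."""
    return min(centroids, key=lambda c: np.linalg.norm(response - centroids[c]))

# Step 2: the subject merely imagines flowers; decode the unlabelled response.
imagined = prototypes["flowers"] + 0.1 * rng.normal(size=8)
print(decode(imagined))  # with this toy data, the decoder answers "flowers"
```

The two-step shape mirrors what Manabe describes: first pair brain data with image data, then predict from visual-cortex activity alone.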

24:42

I see.

24:44

Brain decoding.

24:47

Yes, it's called brain decoding.

24:50

It's a technology that we're using in a number of different projects at the moment.

24:55

The latest one is the dance prototype.

24:58

Essentially, you visualize certain poses from inside the MRI.

25:03

Just in your mind.

25:05

That's right.

25:07

And the system actually reads what you're thinking.

25:11

If I were to visualize a whole dance, for example, it would capture data for that dance.

25:17

That's incredible.

25:18

And so you can make a virtual avatar dance.

25:22

I can see a lot of possible applications for this.

25:26

For example, you could visualize yourself doing something, then that would be turned into actual images,

25:32

and seeing those images could be a great way to help yourself overcome a mental block.

25:38

I feel like this could be super helpful for learning new skills.

25:43

There are also so many things we can't do in the physical world, but which avatars could potentially do for us in the metaverse one day.

25:54

On the other hand, there are definitely lines that mustn't be crossed in a virtual setting, and these lines need to be clearly defined.

26:03

We need to decide exactly how far computational design should go.

26:10

How much freedom devices like this should be given is a difficult question and a very important challenge.

26:18

How far do you want to take your work?

26:20

And what purpose do you want to use it for?

26:26

I think the pandemic has helped me reexamine the core of my work.

26:31

What is it that actually interests me in this space?

26:35

I'm using this opportunity to reconsider my work through that lens now.

26:40

And the answer, for me, is sound.

26:42

It's why I built the space we're in here.

26:48

Sound is the heart of it, and I'd like to go back to basics, and see what computational design can do in that field.

26:57

I'm sorry to say I don't have some overarching ambition to tell you about!

27:02

No, no, not at all!

27:04

The fact that you feel compelled to return to your roots seems to me almost like a sign that this is something not just you, but the whole world is really aching for.

27:14

It's been wonderful to have this chance to talk to you today.

27:19

Thank you so much.

27:22

Thank you.