Albert Bozesan

The Surprising Way AI Makes Us More Human - TEDxTUM Talk

Can machines tell great stories? In this talk, I reframe AI neither as a competitor nor as a collaborator, instead introducing a third option that can help us be better humans.

While not emotional themselves, AIs have proven that they can communicate emotion very well and tell effective stories. This may seem counterintuitive, but emotion in storytelling does not necessarily originate in the storyteller; it happens in the audience. There are plenty of examples of creative humans who have made us feel good things and learn important values without themselves being especially moral.

This automation of emotional communication can sound dystopian, but it could be an incredible boon to our society and world if used to tackle the humanity-sized threats that science and facts alone have been unable to address.

How can I use Generative AI ethically?

As users, we currently have very little control over what data goes into training AI models. There are efforts to create transparency in this space, such as Stable Diffusion being trained on the publicly available LAION dataset. But without hundreds of millions of dollars of our own, we have to wait until the first models trained entirely with artists' permission appear.

The good news is that Generative AI image models are not collage machines that stitch "stolen images" together into new ones; this is a common misconception. The original dataset is not present in the AI model, which is why lawsuits are being decided in favor of the AI creators. Germany even has a law on the books allowing AI training on copyrighted materials: § 44b of the Urheberrechtsgesetz.

It's also why I don't mind images of my face being in the dataset without my permission: it is completely impossible to reconstruct or steal them from the model. All they do is help the AI understand what a male, Caucasian human face looks like.

Nevertheless, the problem is real and many of the complaints are legitimate. I would much prefer a world in which all artists are happy with the existence of GenAI, so that we can focus on making it valuable for all of us. That's why I got together with fellow creatives and developed the FAIRCodex, the Framework of Artificial Intelligence Responsibility.

While we have no control over the dataset, we do have control over what we prompt. That's why I consciously refuse to use any trademarks or living artists' names when creating with AI. This also forces users to be more creative, rather than simply borrowing someone else's style by prompting their work. Check out the FAIRCodex to learn how to do this, and feel free to use our FAIR Seal if you do!

How can I get in touch?

Feel free to send me a request via LinkedIn!

— Albert Bozesan