Get from Underestimated to Iconic with Billion Dollar Moves™
Billion Dollar Moves™ with Sarah Chen-Spellings
June 27, 2024

Beyond ChatGPT: AI Insights and Impact w/ OpenAI, Google DeepMind, and Trill Impact

This week’s episode is fresh out of the oven! The Family Office Alliance Stockholm Symposium took place about a week ago.

In one of the panels titled "Purposeful Investing in AI," Sarah Chen-Spellings moderated a discussion on AI and impact investing with an emphasis on women's perspectives. She was joined by esteemed panelists Dorothy Chou from Google DeepMind, Sandro Gianella from OpenAI, and Nina Rawal from Trill Impact Ventures.

In this discussion, we delve into the fascinating world of AI and its impact on various industries. We explore the rise of generative AI like ChatGPT, the importance of identifying AI-centered innovations that are genuinely impactful, and the potential risks associated with AI's rapid growth. We've covered it all.

 

TIMESTAMPS / KEY TAKEAWAYS

0:00 - Intro

5:16 - Where AI is Today: Discussing generative AI such as ChatGPT and more scientific applications.

10:26 - Is This a Hype Cycle?: How are we positioning ourselves as investors?

13:21 - Identifying AI-Centered Innovations: How to spot innovations that are truly grounded in power and committed to purpose.

16:17 - The Extinction Risk with AI Growth: What guardrails and institutions are needed? Comparing OpenAI and Google.

27:12 - Accountability in Investments: Balancing financial returns with meaningful outcomes.

32:51 - API Version of AI: Is it better or worse?

35:06 - Balancing Advancement and Ethics

37:50 - Closing Remarks: Building trust between humans and the digital world, redefining what constitutes a safe or good investment in AI, and discussing the incentives created by investments.

-

Quotables from the event:

 

"As a life science investor, I see the endless opportunities AI offers when it comes to understanding human disease. I would argue that it's unethical to not explore AI in making the untreatable diseases treatable." —Nina Rawal, Trill Impact

 

"There's a lot of fear around AI, but as someone close to the work, I'm not worried. The lesson of the last decade isn't to avoid the tech platforms—it's to structure them the right way with the right people around the table, and that's a constant learning." —Sandro Gianella, OpenAI

 

“Let’s not forget what’s important is also not what gets invested but who gets invested in.” —Dorothy Chou, Google DeepMind 

 

About Family Office Alliance

The Family Office Alliance was established by Principals of Family Offices in Europe and Singapore. We believe that with resilience and common purpose, global challenges can be transformed into uncommon opportunities. It is a bridge of trust for radical collaboration between Family Offices in Europe and Asia.

-- Family Office Alliance

 

The Family Office Alliance Stockholm Symposium | Event Recap

 

-

𝐁𝐢𝐥𝐥𝐢𝐨𝐧 𝐃𝐨𝐥𝐥𝐚𝐫 𝐌𝐨𝐯𝐞𝐬 is THE show for the audacious next-gen leaders.

Unfiltered. Personal. Inspirational.

Tune in to learn from the world's foremost funders and founders, and their unicorn journeys in the dynamic world of venture and business.

From underestimated to iconic, YOU too, can make #billiondollarmoves — in venture, in business, in life.

 

PODCAST INFO:

Podcast website: https://billiondollarmoves.com

Watch on YouTube: https://tinyurl.com/sarahchenglobal

Join the community: https://sarah-chen.ck.page/billiondollarmoves

 

FOLLOW SARAH:

LinkedIn: https://linkedin.com/in/sarahchenglobal

Instagram: https://instagram.com/sarahchenglobal

Twitter: https://twitter.com/sarahchenglobal

Transcript

Philip Von Wulfen:

Yesterday and today have all been about how powerful, how quick, and how dangerous AI is for us humans.

Whether it ends up being a tool that we use to improve the world, or whether we end up being the tools that it uses to take over the world. So I think one of the things that family offices deal with, or should be dealing with, is: how do we give back?

How do we, in the end, use our network and our financial ability to improve the world? And obviously AI needs to be used in the same way. So I'm super happy to introduce Sarah, Dorothy, Nina, and Sandro for a panel on how to invest in a purposeful way in AI.

Sarah Chen-Spellings:

Good morning, ladies and gentlemen. Philip said earlier that this crowd is like a bottle of bubbly. You're all just bubbling when you open the top. So let me get the energy out. We've been listening for a while.

Can you say alliance when I say family office? Just shout it back to me, you know, family office. Rah-rah a little bit. Family office. (Crowd: Alliance.) Family office. (Crowd: Alliance.)

All right. So a very good morning. My name is Sarah Chen-Spellings, coming from the United States. I'm the co-founder and managing partner of Beyond The Billion, which was launched as The Billion Dollar Fund for Women, so there's a little hint in the name. We are the world's first and largest global consortium of venture capital funds, GPs and LPs, that are investing over a billion dollars into women-founded companies.

Of that, we have deployed $638 million and have 15 unicorns that have been invested in by our partner funds, from Canva to Airwallex. So to today's topic: we are a group of family offices and, I believe, purposeful investors. Now, we've had a little bit of a landscape of where AI is, what we should be afraid of, the power of the creator economy, making things faster, better, cheaper.

But the question is, are we doing more and more for nothing? How can we be purposeful in creating impact and thinking about what these efficiencies will actually bring us? To do so, I'm so excited to bring a power panel coming from many different places: London, Berlin, and here in Stockholm as well.

Nina Rawal from Trill Impact Ventures, so she's hitting the trillion. Dorothy Chou from Google DeepMind. And of course, Sandro Gianella from OpenAI. Round of applause, ladies and gentlemen.

All right. So we've talked a little bit about the power of AI, right? That's been the theme today. Talk to us a little bit about where we are, from the 2017 trigger points. And of course, DeepMind was actually launched back in 2010, so we've been at this for a while, and yet somehow the hype feels so recent, right? It feels pretty new.

Now these experts have a little bit of a background to say the least. Tell us where you think AI is today. Are we just scratching the surface or what do we have to look forward to?

Dorothy Chou:

If you look at what's been the current manifestation of AI, which is what we're seeing with generative AI and chatbots and image generation, that's one angle of it. But what I'm most excited about, and what I think is more meaningful in some ways, is the scientific application.

So I'll give you a few examples. Google DeepMind was working on a decades-long problem called protein folding. And before you all roll your eyes: if you remember, during COVID, the image we all had of the coronavirus, right? That spike protein was what we were creating all the therapeutics to bind to.

We created a solution that can predict the structure of those proteins, something that used to take four years of a PhD for a single protein. We figured out how to structure them and released all 200 million known proteins in a single database. That's over a billion years of research condensed into one release.

So you're looking at an acceleration in drug discovery that's incredible for all kinds of reasons. And those proteins can also be used for cleaning up plastic waste in the oceans and things like that.

A second example in science is AlphaMissense. Genetic mutations are all very, very complex, but we've been able to detect whether they're benign or harmful.

And then on the sustainability side, there are things we're doing around weather prediction that are making 10-day weather forecasts much more accurate than before, especially in cases of extreme weather, so there's more time for people to prepare. Climate change is coming; this is something we need to be able to do.

And then my latest investment as an angel investor is actually in carbon capture technology, so material science. So there's a lot of really interesting ways that AI can manifest in the world. And I think it's up to the investment community to decide, are you going to just throw money at something that's like another foundation model or a sentence on a computer screen?

Are you going to really dive in to what you want that AI to be able to do for the world?

Sandro Gianella:

Maybe the first thing to say is: take a deep breath, and take a moment. Some of you who have tried this with the large language models will know they actually give you better responses if you start ChatGPT with a prompt like, you know what?

Take a deep breath, take a minute, think about where we are today and where we're going. I think there's a lot of anxiety and frenzy, both in the investment community and among the policymakers and regulators that we speak to. So one of the things for us, as companies and labs that have been working on this technology for 10 years, is to get the world to calm down just a little bit, and really think about where it is coming from, where it is today, and where it is going.

If you talk to researchers who've been doing work on these language models, and I know there are some in the room, actually, and other speakers who've been there, the advancement didn't really come as a surprise. The ingredients were there to see: the capabilities on the compute side, the ability to train on these large data sets, and the algorithmic breakthroughs underlying these current models.

The interesting thing is that we were also surprised. There's this kind of infamous Slack thread that a colleague started the day before we released ChatGPT, and I think there's a misconception here: for us, it was genuinely a research preview.

The product team was like two or three people, because 3.5, the model that ChatGPT first ran on, was already available through our API. And we built these models not with a specific use case or industry in mind. Think of it as really basic research, where we're interested in just continuing to push the frontier and the capabilities of the models, so that some of these use cases Dorothy talked about, and many others, can come after.

But we were sort of wondering, like, maybe it's really still a bit crappy, forgive my language, and why is no one building on top of it? And so we thought, maybe we need an interface that is just a little bit easier, so humanity can grasp what is about to happen. And that was ChatGPT.

And the Slack thread was like: yeah, we're launching this research preview tomorrow. We're not expecting any fuss. We don't think anyone is really going to notice, and to the go-to-market team: I don't think there'll be any inbounds, you know? And I think that tells you something about a bit of a mismatch. When you speak to the researchers, there is a lot more care and understanding of where the technology is coming from, what the things are that we need to do, and also in the investment community, to make sure we can continue to scale it in safe ways.

And then there's the product craziness that happened. I think we need to disentangle those a little bit and be as real as we can be about the limitations of the technology today, but also bet on and think about: what if the base models and their capabilities and intelligence continue to increase?

And I think that's when you have interesting conversations.

Sarah Chen-Spellings:

Absolutely. And Nina, you come from the investment world. Tell us a little bit about Trill Impact as a house and about Impact Ventures. What are you seeing in the markets?

I mean, I think we just saw the slides early on about how much is being invested in AI. Is this a hype cycle that we're in, and how are you positioning yourself as an investor?

Nina Rawal:

So first, on Trill Impact: we are aiming to be Europe's leading impact investor, fully for profit, but everything is aligned around the UN Sustainable Development Goals. So very much for good and for profit at once.

In the ventures strategy we're betting on those transformative technologies, you spoke to some of them, first of all because we need them as humanity, whether it's the next pandemic or in relation to climate change, but also because some of them will be tomorrow's market winners.

So we're really looking for that combination of the two: solving the world's biggest problems and building strong companies off of that. To the question of the hype, I think we saw some pictures or slides on where we are in the cycle, and I'm not a macro expert, but I'll build on what you said, Sandro.

We get these questions every now and then, and you think: are there many research companies in life science, which is what I do, that don't use AI in one shape or form? If there were, I would be a bit worried. I mean, then you can also ask: what is AI?

I have a seven-year-old and he asks me, but what actually is AI? Is it AI? Is it big data? Is it machine learning? I think that is also a bit mixed up for many companies, but in one shape or form, I think most science-based companies today do leverage it already. And if they don't, they really should, if we put it that way.

And I just want to build a little bit on what you said, Dorothy. I think we're just in the beginning in terms of starting to see the translation of what that will do. If we take again the example of drug discovery, I'll take one case we looked at last year, which is the prediction of sepsis, a bloodstream infection. If you know anyone who's had it, you know it's touch and go; it's a question of hours whether someone actually lives or dies. So it's very dramatic in a hospital setting.

The way you treat that today is a very trial-and-error, empirical approach: take one antibiotic and see if it sticks or doesn't work. And you can imagine, if it's a question of hours and you give the wrong antibiotic, that is a terrible situation, especially as many of the bugs out there are becoming more and more resistant to antibiotics.

So we looked at various companies that just take large, large data sets. And I usually say AI was made for life science and healthcare, because there is so much information and we cannot digest it in a meaningful way; if we cannot do a brand in 59 minutes, we definitely cannot solve blood sepsis diagnosis in 59 minutes.

And so we were really looking at that ability to predict, a couple of hours earlier, whether that patient is in the danger zone for bloodstream infection. It's just one simple application of AI, and not at all the most advanced one, but obviously a matter of life and death and of decision-making in a clinical setting.

Sarah Chen-Spellings:

Yeah. To Philip's point here in terms of investors in the room, what are your considerations? I just want to stay with you, Nina, and we'll circle back with the rest.

What are your considerations? I think today, for most of us who are getting pitches from entrepreneurs, it almost feels like they're just slapping AI on, right?

You slap AI on, you raise a couple of hundred million more. So how do you ensure that this is not BS, that it's truly grounded in the power of the technology and committed to the purpose at the end of it, with AI as a means to an end?

Nina Rawal:

Yeah. And I think in our context, we are an impact investor in the VC space.

We always look for unmet medical need, because that's how you know there's a willingness from someone to pay for that innovation. But we specifically focus on underserved patient groups: it can be women's health, children's health, antimicrobial resistance, or accelerating market access outside high-income countries.

So those are the lenses we look through. And again, I'm a neuroscientist; I don't come from the data side. I would not even try to challenge the specifics of the algorithm. But the AI component: don't slap it on if it doesn't address a real underlying medical question that we are looking to address.

I think that's how we think about it: is there a logic for this AI component? I saw a founder a couple of weeks ago, a US founder, and she came from an esteemed background, so I was first of all curious why she was there, trying to pitch this to me in particular. She was in the nutrition space, basically selling healthy food to mothers and children.

And then she started talking about AI, and it made me so curious: what does AI have to do with just eating healthy food as a mother, and for your children? I think that is not super convincing. But there are such fascinating applications of AI.

Let me give you one more, to the point of children and mothers. We looked deeply into the autism spectrum disorder space last year. And I'm not talking about neurodiversity here; I'm talking about severe autism, where children, or grown-ups, cannot live independent lives. And that is a space where, again, as a neuroscientist, I think we don't understand the basics of autism spectrum disorder.

We have information, but we really don't understand the basics of autism, the underlying disease: separating large patient groups into smaller disease groups where they share the underlying mutations, and so on. So there's an opportunity to leverage AI to really go after that, basically the whole medical literature, looking at very innovative ways of thinking about it.

Other signs too, the clinical signs and symptoms. It so happens that, for a certain type of autism spectrum disorder, there seems to be a different length to your pinky finger, if I remember correctly. These are things you wouldn't really think about, but when you pull all those data sets together, you can find interesting ways of identifying patients differently, making sure they're subgrouped in the right way and entering the right clinical trials.

I think it's just mind-blowing what we can do if we use AI the right way.

Sarah Chen-Spellings:

Yeah, so mind-blowing indeed. But of course, Dorothy and Sandro, we were chatting outside in the beautiful Stockholm sun about the fact that even OpenAI and Google DeepMind, who are the experts in this, did not expect every application that AI today claims to be able to address.

So you've almost created it, as you were saying, not vertically for a specific use, but horizontally, and seen what sticks. And of course, in 2023 there was that famous open letter signed by Sam Altman, Elon Musk, Shane Legg and others, warning about the extinction risk with the growth of AI.

Have we addressed those concerns today, in 2024? And are you concerned about just how far-reaching this is, with everything that Nina has specifically laid out?

Sandro Gianella:

Maybe I'll take just one sentence on your first question, on the BS in the investment thesis. One of the additional things we think about with the OpenAI Fund, which our colleague Ian runs, is where there is a group of people that has almost an unfair advantage in understanding a particular process or a particular problem, has really thought it through, is annoyed by a specific thing they're trying to solve, and then comes in and deeply thinks about how AI can help.

One example I give in the non-medical space is a company called Harvey that we invested in. They're building sort of an AI for legal professionals, and they actually went out and built a custom model, together with our engineers, based on U.S. case law, and they sat down legal partners, well-paid legal partners, to do the human-in-the-loop feedback iteration that you do on these models, to train them very specifically.

So the thing we're excited about is learning from people who are really deep in one of these problem areas and then come to us because they challenge where the technology isn't yet good enough, where they say: we can't have this thing hallucinate on legal cases, is it actually possible?

And are your researchers interested in helping us solve that problem? That's also why I wanted to mention it, to bridge to your point around how worried we are about safety, and whether we are addressing it enough.

I think there are two things I would say about that. One, I feel like there's often a false dichotomy: either we advance the capabilities of these models, or we stop and keep them safe. Actually, the more you speak to researchers working on them, the more they think that making them more capable goes hand in hand with better understanding the interpretability of these models. Our friends at Anthropic have recently done interesting work on that, we've released some of it, DeepMind is working on it; all of the labs are trying to better understand this technology.

And I think that by actually making them more capable, the technology will also be safer. Now of course, to the point you mentioned, if you are sitting there and you see how far the scaling laws have gotten us so far, the debate internally within those labs is also about: wait a minute, how do we release this technology in a responsible way?

What are the guardrails and institutions that are needed? What are the socioeconomic implications of this technology? And that's what I spend most of my days on, talking to governments and heads of state and regulators, because again, we feel there is a responsibility for decision-makers to have a sense of where this technology is going, and then to think about which institutions should be built on that.

You might have heard that many governments across the world are now putting together so-called AI safety institutes, where they're doing the type of red-teaming and testing of these models that we do in house, because governments feel they have a responsibility to their people to also better understand what the pitfalls are.

But I feel quite optimistic at the moment about just the amount of research and the amount of thinking that is going in, not just into pushing capabilities, but at the same time into how to make that safe and where the guardrails are. As an example, with Sora, or even the voice model: it was something where we decided we wanted the world to know that these capabilities exist and that these models can now generate video in the way that you've seen, but we decided not to release it as a product.

And we're having conversations with filmmakers, creatives, and artists to start to think about what the right way is to introduce that technology to the world. How can we do that in a responsible way? For obvious reasons, the public debate is sort of over here, while the real work that really talented colleagues are doing feels a bit more grounded.

And that's why I think the closer you get to the labs that are thinking through this, the more I feel optimistic that we can tackle that.

Sarah Chen-Spellings:

Yeah, thank you. And of course, talking about pushing capabilities, I hear you. Those of you who know me know I tend to make things a little spicy, so we have some contradictory views here.

Dorothy, in a recent interview you said you thought that perhaps OpenAI was a bit too soon in their release, without the guardrails. What do you say to that?

Dorothy Chou:

Look, I will also say there's a difference when you work at a company that is owned by a big tech firm like Google. OpenAI can take risks that we can't take, for all kinds of reasons. And that's part of the life cycle of different types of startups and different organizations, and the trade-offs you make.

And Sandro and I have known each other for about 20 years, so we talk about these trade-offs all the time. I do think what Sandro was saying is really important. And part of the reason we work in policy, by the way, isn't because we're just setting guardrails all the time; that would be terribly boring.

Look at, for example, other tech ecosystems and how policy has to be just as creative as technology is. Think about when we were in school: we were using Napster and Kazaa, and that was one type of music streaming service at the time. Then something fundamental changed when everybody moved to iTunes. Sure, there was the iPod and things like that, but it was the digital rights management system, how the music industry started working with the tech industry, that fundamentally shifted consumer behavior. And then you saw it shift again when Spotify happened, right?

There's a licensing change underpinning everything. And so a lot of what Sandro and I spend our time thinking about with the internal teams is how you can incentivize better-quality products and better behavior with policy change. For example, in the research we do at Google DeepMind that Sandro alluded to, we basically did a taxonomy of all the potential harms from language models, and we held back our first language model paper until we could release both at the same time.

So you build that awareness into the system. We've also done research on decolonial AI and the influence of most of these labs and services being based in the global North. How can they possibly take into account the global South and the perspectives of people who might not be represented adequately at these companies? There are people there who think a lot about that.

There's also fairness: we can look at each other and think about bias, but what about fairness with unobservable characteristics, like sexuality, for example? So this is research that is ongoing inside these firms. And the reason we do it is because we think this kind of innovation, and how we think about policy, can be just as important as building the capabilities themselves.

Sarah Chen-Spellings:

Yeah. So talking about that, Sandro, and I'll come back to you, Nina, in a short second.

The interplay of innovation and regulation is a hot topic these days. Sam Altman himself has actually said that he welcomes regulation, and there are suspicions of regulatory capture and of weeding out the competitors. But I think it was David who showed the slide earlier about the Black Vikings, talking about work culture, bias, all these things. Where do you think morality stands?

How do you decide who sets the rules of the game, being as powerful as you are today at OpenAI?

Sandro Gianella:

Yeah, I think maybe this alludes a little bit to the question you had around kind of how do you release, when do you release, and what do you pair it with, and what do you intend to spark in terms of a conversation when you do that?

I do think this is where we have a sense that having more people use this technology, and hence understand what it's good at, but almost as importantly what its limitations are, and come to us with the biases, the hallucinations, the things the models are not good at, is just as important to understand.

And then we can have a conversation that feels more grounded with regulators and policymakers: okay, given where we are today and where we think the technology is going, what are the guardrails we can set to address the real harms and problems that exist today, often with legislation that already exists?

And then, just as much, if this technology really is going to continue and we're going to have PhD-level capabilities across most domains, clearly that poses economic and social questions for our societies where governments and decision-makers should have a say. That is their core role.

That is what they are elected to do. That is what they feel they have the responsibility to do. So I think that interplay is actually needed and important. And an example I'd give on the question you asked about who gets to decide what the values should be that these models have?

A couple of weeks ago we released something that sounds a little technical, but that I actually think is quite interesting, called the Model Spec. It's kind of our version of the values that we want the model to have, and the goalposts for when there are competing objectives: what answers should the model give?

An example is if someone asks questions around mental health conditions, or it's clear that they're looking for help. One of the easy things, if you just want to get rid of the liability, is to have the model refuse. The famous: I'm a large language model, don't come to me with this stuff.

Is that the best way for the models to behave? Or should we find a way for the models to be capable enough to detect that this is a moment to be empathetic, a moment to maybe point users to information that is more authoritative?

And the reason we released it is to have that public debate about it, because I think the difference from past technologies and the way companies have behaved until now is that you really sense, discomfort is maybe too strong a word, but a sense of the responsibility that is there, and almost a begging, to your point about Sam and us saying:

We need this conversation. We need all of you to wake up to the conversation, and we need you to help us make the right decisions. But that is often not easy. It's hard work to get people to have a conversation about the right things at the right time, and you can't always control the politics of it all and the geopolitical situation.

So that's what we're hoping for and trying to do. What our teams are doing is navigating that conversation in a way that leads to an outcome where we do more to control the harms that these technologies could expose us to.

Sarah Chen-Spellings:

Absolutely. So Nina, that brings us very nicely to you. And then we'll open up for one or two questions.

As an investor, and speaking to the investors in the room, how do we hold our investments accountable and direct capital not only toward financial returns, but toward meaningful outcomes?

The power of the check, right? I think that's what we're all here to do, to talk about the power of your check and the vote that you have in your hands for the future. I mean, let's not forget how privileged we all are to be here in Stockholm with the wealth that we have to really make an impact.

Nina, your thoughts on holding ourselves accountable as investors, on the potential harms of AI, and on how to be purposeful in our investments with AI coming as a wave.

Nina Rawal:

Building on what Sandro said before, I see AI as an enabler of solving those big problems that we as humanity face. I say that because, of course, that is the lens we invest through. I mean, we saw some of the previous presentations. You can do AI for anything, and I don't know how to speak to that because I don't understand that.

But through the lens of improving medical outcomes for people, the question really starts for us around, again, things like the next pandemic: we know it will happen, we just don't know what it will look like. And we've been hearing over the past weeks that the world cannot unite around a new pandemic treaty, and so on.

So we know what some of these very big problems facing humanity are. And that is really the purpose we're trying to drive towards: improving human health, and then using AI towards that benefit. I think it has to be in that order. And we've had so many conversations internally around AI.

We get these questions: this is early-stage investing, so what if the companies pivot and decide to do something fundamentally different? And I guess that's the analogy to what you're saying: you release something to the world and you don't know what the world will do with it.

As long as we are there in the boardroom, and we are a so-called Article 9 fund, we will do everything in our power to say: that is not the goal here. And maybe this has been said earlier in the meeting, but the way I sit with it is that there's no doubt some really terrible things can happen when you invest in companies, give them capital, and the wrong things happen.

But I think the net opportunity is huge, with regards to thinking about how it can help us understand the human body and pathology, and how we address human disease.

Sarah Chen-Spellings:

So, tangibly, do you actually have a process for ethical considerations when you think about all these new technologies?

Nina Rawal:

Absolutely. It's part of our due diligence process. We will go out, speak to experts and understand what could possibly be the ethical dilemmas around this.

This is not AI, but I'll use it as an analogy. We look very closely at the gene editing space, which is one of my favorite spaces because it is so transformational. I think you've heard the stories: blind children who can see again, deaf children who can hear again.

I mean, it's biblical in the proportions of what gene editing can do. And there it's the same as with AI: very thorough conversations around what the ethical considerations are. And I think the beauty, still, I know this can maybe sound one-dimensional, but the beauty in the health space, the medical space, regulated spaces, is of course that there are regulatory authorities, and you spoke to how they have to be up to speed.

But if you're playing within the regulated markets, you cannot just launch things without these considerations having been addressed.

Dorothy Chou:

Yeah, I would agree with you on that. I spend a lot of time investing in undervalued markets with overlooked founders, and I think it's really important, because Sandro and I have both spent time in Silicon Valley and in Europe, and the vast majority of founders who are getting funding in AI are very one-dimensional.

And it's a huge question, and a problem, how we're thinking about which problems AI is being applied to solve first, to your point. Because the truth is, usually what happens in Silicon Valley, where I spend a lot of time, is that you just go down the path of least resistance, which is going to end up in, like, personalized soda drinks before you get to anything in health or climate.

And so the regulation is really helpful and important, but for some investors it also feels like a barrier. I was just spending time at the Karolinska Institute yesterday, for example. How do we really think about directing investment, and with policy help updating some of these regulations in regulated areas, to facilitate the level of innovation we want to see?

Because I think most parents in developed countries are saying that they're afraid their kids will be worse off than they are in many ways. And so AI can be helpful, but it takes actual work, from a policy perspective and from an investing perspective, to direct the funding and the social change and the political change that's necessary for that.

Sandro Gianella:

Just to say that I think this is particularly important in the next one, two, maybe three years, because we all realize that whether the world is going to accept and give all of these companies and institutions that are pushing the frontier what I call a social license, not just to operate but to continue to advance, is in large part going to depend on whether we can tangibly show that real problems are being solved, or that money is flowing into communities that can now solve problems they previously couldn't.

And I worry, like Dorothy, about the same thing: I think it is going into kind of the usual places. I'm not saying it shouldn't go there, and that's fine if you're solving that, but I really hope, with the advances of this technology and what we've heard earlier today as well, that it's going into really purposeful problems that we know are urgent and need solving, and that we're doing it in a way that advances the capabilities of this technology and pushes those who are developing it to really think about different things than they used to think about before in the Valley.

Sarah Chen-Spellings:

We'll come back. I see a question in the audience.

Sandro Gianella:

It's a good question. I think the reality is we do both as well. Our models are available through an API, and people are building all sorts of things on top of them. But for better or worse, ChatGPT, and the specific way in which it allowed humans to interact with this technology, has sparked the imagination of a lot of people.

And I think that's to me, both good and slightly worrying on the regulatory front and I'll explain why. I think it's good because it did feel like a wakeup call, leading to conversations at the political and regulatory level that we hoped would happen and that we are then trying to steer.

So they're coming at it on the basis of where this technology is at. The slight worry I have is that it has people thinking of that one application as the thing this technology was created for, which to me is absolutely not the case. I continue to view it as a research preview, and also, with the multimodal things we're doing with GPT-4o and the voice mode we released a couple of weeks ago, not necessarily as a finished product but as a glimpse into what this technology can help solve.

And I think it is good that there are labs doing that on the scientific frontier, while other labs are doing it more on the human-to-computer interaction side, having people start to think that maybe there's a way we stare less at our screens.

That's my personal hope, with an 11- and a 13-year-old. I allow it because, obviously, they're like: yeah, you're not going to stop us from using ChatGPT for homework, are you? And so the deal I have with them is that they can only use it in voice mode.

So I have this old phone, I have it set to voice only, and she's sitting there doing her math homework and she's like, hey, I'm having trouble understanding this concept, can you explain it to me? She is, you know, rehearsing her work and getting feedback on it. So yeah, I think both are good.

We wrote this paper called GPTs are GPTs, a little bit tongue-in-cheek in terms of general-purpose technologies, to look mostly at the economic impact. But I think it's good that there are so many different approaches, and that we have individuals and institutions and scientists grappling with this technology and where it's going.

Sarah Chen-Spellings:

Yeah, we have a second question.

Alex:

Where's the balance? How do you handle unrestricted AI training, which could be highly unethical, versus setting the guardrails? And I guess it's directed more at this side of the panel, because you have, you know, big Google, which kind of has to be ethical and stay within that, but still wants to be at the forefront.

And then you have OpenAI, which is still fairly unproven in the big scheme of things, but can be more agile. How does that balance work between you two?

Dorothy Chou:

I would say that we think of ourselves as, like, an elder sibling in this space. I've been at Google DeepMind for seven years, by the way, and it's changed every single year. When you work in emerging technologies, a lot of the normative behaviors are set by the first movers.

So we think about staying at the forefront of the technology so that we can set those terms. And we also work really closely with OpenAI, Anthropic, and others to think through: what are the normative behaviors we want to encourage? And also, frankly, how we think about regulation down the road. You are thinking through, like, proto-regulatory stages: what we need to share with governments now, so they understand what's happening.

I mean, Sandro and I spend a lot of time with people who literally can't tell you how the internet works. So we have to think through how we get some of this knowledge transferred into government, and how we help them to set some of these terms in ways that are meaningful.

If you're familiar with the way they're setting these terms now, you have to report things if you are training at 10^26 FLOPs, which, if you know the technology well, is not really meaningful when everything is becoming more efficient. So how do you set standards that are actually going to stand the test of time and be future-proof, while also adjusting to the increasing capabilities that are happening?

One of the best examples we have is from security: software that we release is always going to have bugs. And so what the security community started doing was this norm called responsible disclosure, which means they pay hackers to find problems with their technology, and then release the findings widely.

They give them compensation based on how high-quality the bug they find is. And they report it to the company, so that it can patch it within 90 days before the problem is disclosed to consumers. That's a huge norm that was developed before regulation took place. And so a lot of what Sandro and I think about is: what are some of those things that can apply in this space?

The problem is with AI, some of the problems that you find might require three more years of research to solve. So how do you think through what is responsible to release in that context? What applies there based on what we've learned before? And then, how do we set up a system that, like in the security community, is agile enough to keep moving with the technology?

Sarah Chen-Spellings:

Yeah, and we're short on time here, so words of wisdom: final parting words from each of you on purposeful investing in AI.

Nina Rawal:

So I'm going to again take it from the user perspective. My worry, actually, more as a neuroscientist, is thinking about children interacting in a digital-first world, and how you build trust with the digital world and between humans. That's actually what scares me, like the deepfake thing.

But if I think about it in terms of purposeful investing, I'll just leave with the parting thought that, just like with many groundbreaking technologies, I would argue it's massively unethical not to apply them for the benefit of humankind and improving medical outcomes for people.

So there's also very much that flip side of not using AI in a purposeful way.

Dorothy Chou:

I'll just leave you with an anecdote. I think we have to redefine what a safe or good investment in AI actually looks like. I have a good friend who was one of the most technically competent women at DeepMind.

There are very few women like her in those roles, a female manager, top of her class, everything. She left to build an AI company that was going to really work on mental health. She spent a year trying to raise, and she called me around Christmas saying: I've talked to hundreds of VC firms and none of them will invest.

She had a proven product that people are paying for, while a lot of my male counterparts, who have just words on a screen, are raising millions of dollars. And so I think we really have to challenge that idea of what a good and safe investment in AI looks like in order to get there.

Sandro Gianella:

The thing I would say is: think about the incentives you're creating as well, both with the investments, and I think the same goes for regulatory issues. As an example, red-teaming our models and stress testing them is one of the things the industry did without being forced to, because it felt like the responsible and the right thing to do.

But as these technologies become better and better, it's actually a lot harder to find experts in the specific domains. So one of the things we think through is how we can get regulation and legislation to actually incentivize more investment in this space, as an example.

So a lot of what we think about is that interplay: using regulation as a nudge to then make things a good investment, because a market will be created.

And I think continuing to do really good evaluations, stress testing, and third-party testing of these models is going to be incredibly important, and it also creates the right incentives for us as firms to be held accountable, on the issues you talk about, to consumers.

And so, yeah, think about the incentives you're creating by writing checks.

Sarah Chen-Spellings:

Thank you. And with that, ladies and gentlemen, what a powerful panel.

It's important to remember the role that you all play. A round of applause! They'll be here for drinks, so for more questions, I'm sure you'll be taking them as we go.

Thank you so much.

 


Dorothy Chou

Director of Policy & Public Engagement at Google DeepMind

Dorothy leads Policy & Public Engagement at DeepMind, an artificial intelligence company. She has spent her career building social justice, ethics, and accountability structures at technology companies, including the first Transparency Report, an industry standard that more than 70 technology companies use to show how laws and corporate policies affect free expression and privacy online. Prior to DeepMind, Dorothy was responsible for policy development at Uber on consumer protection, safety, and self-driving cars. She also led corporate communications at Dropbox, and worked in communications and public policy for seven years at Google. Outside of work, she is working toward a Master's in Bioethics at the University of Oxford, serves on the development board of the Young Vic, and is an angel investor with Atomico, a leading European venture capital firm.


Sandro Gianella

Head of Europe & Middle East Policy & Partnerships at OpenAI

Sandro Gianella leads OpenAI’s Policy & Partnerships efforts in Europe.

He’s previously built the EMEA Public Policy function at Stripe, worked on technology regulation at Google and focused on the G7/G20 process as a lead analyst for the G20 Research Group and the Heinrich Böll Foundation. He’s a member of the Atomico Expert Network providing counsel on regulatory matters to European scale-ups. He was selected as a Young Leader for the Atlantik-Brücke e.V., took part in the Transatlantic Digital Fellowship of the Global Public Policy Institute and participated in the Internet Leadership Academy of the Oxford Internet Institute. He graduated from the University of Toronto with an Hon. Bachelor in International Relations and from the Hertie School of Governance with a Masters in Public Policy.


Nina Rawal

Partner and Co-Head at Trill Impact Ventures

Dr. Nina Rawal is Partner and Co-Head of Trill Impact Ventures, Europe's leading impact investing house. She previously headed the life science investment team at Industrifonden, a USD 800m VC fund. Previous experience also includes Boston Consulting Group in Stockholm and New York, and VP Strategy and Ventures at Gambro (Baxter Group). She serves on the boards of Cinclus Pharma and Stockholms Sjukhem, a non-profit hospital organization.
Nina holds an MSc in Biomedicine and a PhD in Molecular Neurobiology, both from the Karolinska Institute, with research work done at Columbia University and Hopital la Salpetriere. Recognition for her work includes selection as a WEF Young Global Leader and as a '40 under 40' European Young Leader.