The 11VEN Q&A: Extraordinary AI founder Devrin Carlson-Smith
- Jul 11, 2024
- 14 min read
May 20, 2024
Devrin Carlson-Smith is the type of deep thinker and doer we need in today’s world. As the founder of Extraordinary AI, a strategic advisory helping business leaders who seek to expand their AI literacy, strategy, use-case development and deployment, he spends his days and nights thinking about the emerging technology that’s poised to revolutionize, well, everything.
In a wide-ranging conversation, Devrin and 11VEN's San Rahi discuss why AI is “the largest change we’ve experienced in our lifetimes,” the rise of five-person, billion-dollar companies, what, exactly, we mean when we say “AI,” and much more.
You can reach Devrin and the Extraordinary AI team here.
11VEN: There are two philosophies around business: There’s a notion that business exists to make profit, and there's an idea that business makes profit to exist. And if you understand the latter paradigm, the obvious second question is, then why does the business exist? What is its higher function? I firmly believe that businesses that understand that question are the ones that will have longevity. Let’s start with that frame and ask you ‘why does AI exist?’
Devrin: I like that you brought up the idea of capital direction for business. I read an interesting book by John Mackey, Conscious Capitalism. His role in changing the paradigm at Whole Foods really stuck with me—businesses should have a higher purpose beyond just maximizing profits, to “elevate humanity” and make a positive impact on the world and all its stakeholders. At this stage of my career, I wanted to find more conscious-capitalism organizations that had a purpose beyond Wall Street projections. I wanted to see how I could bring my expertise to the market, especially with the democratization of AI and technology, a tide lifting all boats.
I found organizations that are now leaning into the idea that the grand challenges they set out to solve can be fulfilled. I see a resurgence in folks getting inspired by why they started a business in the first place. There’s an urgency in the marketplace today. We’re now receiving the benefits and the possible upside of a huge technological breakthrough that brings with it the hope of increased resources, increased computation, increased analysis, and increased directional improvements. Effectively, it's technology that by its nature dramatically expands the resource pool necessary to achieve grand results.
What do you mean by directional improvements?
AI is not a new concept. It's been around for decades. But we've actually achieved a moment in history. I call it the ChatGPT moment in November 2022. All of the research that was shrouded behind research teams and large technology companies became consumer-exposed and available. The benefits are just starting to become clear, and we are moving from a state of knowledge scarcity to knowledge abundance. Organizations can now start harnessing the huge potential of where we are heading. That superpower is reinvigorating some organizational leaders when it comes to taking on grand challenges and being more ambitious with their “purpose” in business.
AI is going to be the largest change that we've ever experienced in our lifetimes. It has the potential to invigorate and bring with it some huge upsides, if we harness it correctly. We need to go forward with our eyes wide open, knowing full well what it's capable of in a positive and a negative context.
You and I met during our agency lives. I think right now, the term “AI” is being used in the same way that the lazy advertising terms “social” and “digital” were before—as a general hold-all. Would you agree with that? How would you unpack all of what AI is and means today?
Having been in the tech world for around 25 years, we've seen some of these innovation waves coming through. I think AI is bigger than all of the previous waves—combined. The internet gave us the browser. Smartphones gave us apps and the communities that came with them. Cloud-based supercomputing gave us processing power, and social gave us reach and proximity.
With artificial intelligence, I think we are embarking on something that exceeds the biggest technological advances we've seen in the last 20 to 50 years. It's worth noting that we're very, very early on in what's happening. We are just getting started, and the AI tools we are using today are the least capable they will ever be. So many of the research leads are enthusiastic about how far we've come, but also, more importantly, how far we have to go. We still have the opportunity to shape where it's going and direct it for good.
I believe that AI is going to be the largest change that we've ever experienced in our lifetimes. It has the potential to invigorate and bring with it some huge upsides, if we harness it correctly. We need to go forward with our eyes wide open, knowing full well what it's capable of in a positive and a negative context. This is unlike any technological innovation we've ever seen. We have released tools in the marketplace before, but this is not a tool. AI is general purpose technology, which means that its applicability is limited only by your imagination. We're moving into an era with technology that has far more independence, far more capability, and far fewer limitations.
This is not purpose-built software to do a specific thing. It is far more open than that. They said that technology could never be creative. But there's a never-ending stream of creativity coming out of AI. They said technology couldn't really be empathetic. And yet there are millions of people using AI to have conversations about their conditions and to seek advice.
It can now drive cars, manage energy grids, provide access to free health care, and invent new molecules.
The value of these decade-old data repositories can be unlocked because things that would have taken 1000 years of man-hours can be done quickly, effectively, and with less effort than ever before. CEOs and organization leaders can hunt for patterns inside their data, test and confirm hypotheses, and move faster towards their goal. These are eye-opening moments.
Why start Extraordinary AI? What is its higher function?
Early on, I started seeing gaps in knowledge across the business and nonprofit communities. The understanding, the education, and the literacy associated with artificial intelligence vary widely. I thought it would be helpful if everyone had a 100-level understanding of what we're talking about, getting people to a level set, a point where they could understand the difference between a large language model and an application for a use case.
But I also wanted to answer the bigger question, which is as a business owner, once I have this knowledge, how do I harness AI in my business? How can I look at my existing business practices and overlay the best parts of AI to create purpose-built directional paths forward? How do I apply a methodology and a process to ensure I can be successful? I started Extraordinary AI because while I saw lots of dialog around the tools, I didn’t see any complete initiatives that addressed the how.
In addition to developing AI literacy, education, and training, I started Extraordinary AI to help build a roadmap and a blueprint of the essential steps and tools necessary to build momentum and get started. It’s important to take advantage of the models, tools, and technologies available, as well as to look at ways to start experimenting, learning, and implementing strategies that will hit the particular goals in your business. Starting with your business goals and working backwards is better than chasing AI tools and trying to find a problem to fix.
Have you had moments where you’ve seen this all come together?
We’ve been talking about big data forever. For years, everyone has been storing their information. All of a sudden, AI has allowed organization leaders to say, ‘Wow, I'm sitting on a treasure trove of data.’ The value of these decade-old data repositories can be unlocked because things that would have taken 1000 years of man-hours can be done quickly, effectively, and with less effort than ever before. CEOs and organization leaders can hunt for patterns inside their data, test and confirm hypotheses, and move faster towards their goal. These are eye-opening moments.
The other side is that it’s exciting seeing business leaders move into positions of authority. Someone who has been influential in my thinking is Mustafa Suleyman, who is now Microsoft AI CEO. He has dedicated his life's work to AI and knows that AI is much bigger than just a tool; it’s a transformational moment in history.
AI is something that can process, almost instantly, more data than any one of us could consume in a thousand lifetimes. It is accelerating scientific discovery, addressing healthcare and the climate crisis. But more than that, Suleyman is talking about something that has human elements, something that has creativity and empathy. We're moving past the objective of building a technology with perfect IQ and EQ into what he calls perfect AQ, or Action Quotient—being able to perform actions on your behalf. A new digital species, if you will.
Suleyman is talking about a world where AI copilots and collaborators are ubiquitous. Everyone can have a personalized tutor in their pocket that has access to all the knowledge on the open internet. That means access to low-cost medical advice, an all-knowing strategist, a legal expert, a travel guide, whatever you need. This is where we're moving. For the first time, we’re looking at humanity and technology completely intertwined.
In the next five to 10 years, we're going to see more five-person, billion-dollar companies that slot AI agents into their org charts to perform specific tasks that they get trained to do, than ever before.
How do you see AI helping to solve some of the bigger challenges society is facing?
Scientists and researchers can use massive amounts of data and AI models to predict what's going to happen, to test hypotheses, model outcomes, and identify solution paths themselves. With the help of AI, we are on the precipice of massive scientific discovery due to the sheer horsepower we have recently introduced.
I'll give you an example. In 2020, the groundbreaking AlphaFold project came out. It’s an AI system developed by Google DeepMind that can predict the 3D structure of a protein from its amino acid sequence. Figuring out the exact structure of a protein can take years and cost hundreds of thousands of dollars using experimental methods. AlphaFold shocked the scientific world by accurately predicting these structures, cracking the long-standing protein-folding problem.
Since then, the AlphaFold code base and database of over 200 million predicted protein structures has been made freely available. Scientists worldwide are now using it to accelerate research on everything from developing new medicines to designing more environmentally-friendly materials.
This gets me excited: as this concept of knowledge abundance becomes shared and open, other grand challenges can apply similar learnings to their objectives: climate change, healthcare, universal basic income, mental illness, anything to do with scientific discovery, really. All of them can use large amounts of data and models that are readily available to try to crack the code and discover new patterns.
There is no future going forward that does not include technology. It is here, whether we like it or not. It's always going to be with us, and we have an obligation to help steer this in the right direction for humanity.
One of the things that excites me is the opportunity to unleash creativity across the planet. There’s a potential to move towards community, towards smaller tribes, towards smaller businesses that have huge impacts.
The macro challenges are going to be solved using data and AI. That’s one direction. But there’s potential at the grassroots and small business level, too. Creativity is one. The space that's getting disrupted, first and foremost, is the knowledge worker. We’re moving from a stage where knowledge was siloed—where a CPA or accountant knew how to do taxes, a lawyer managed contracts, an architect understood construction physics, and so on—to a stage where there’s knowledge abundance. If you want information about techniques to reduce methane gas across cattle farms and beef production, or how to reduce the salinity of an ecologically critical lake, or architectural design, creativity, or food dynamics, you can get it. It is no longer trapped within the expertise of specialists. How will the world’s focus shift from acquiring the knowledge needed to make decisions to interpreting and acting on expertise that is infinitely available?
The challenge becomes less about hoarding knowledge and more about democratizing it, then asking how we use it to solve a problem. There will be opportunities for people to create small teams of very powerful thinkers and strategists who use the knowledge abundance at their fingertips. In the next five to 10 years, we're going to see more five-person, billion-dollar companies that slot AI agents into their org charts to perform specific tasks that they get trained to do, than ever before.
In a way, ML and AI are inherently resilient systems. They understand weakness, and they iterate around it, and they find ways to continue to thrive. Our belief at 11VEN is that resilience is a very local phenomenon. So, two-part question: how can humans learn from the resilience of AI? And how can AI make our human communities more resilient?
I'm hopeful that as we are building AI systems, they're built with humanity inextricably intertwined and as such are incredibly resilient. As we begin to normalize AI, you will begin to hear more about AI PLUS human, which I think is the right formula. AI is more than just data and processors. To suggest that AI is merely math and code is similar to suggesting that humans are just carbon and water.
Humans can learn from how versatile AI is becoming. AI can communicate in our language. They can see what we see. They can consume unimaginable amounts of data. They have memory, personality, and creativity. What we're creating is not tools. We're creating, effectively, the amalgamation of all of human knowledge into one system. By its nature, that system should also inherit the resilience that humans have. I like to think of these things as extensions of ourselves, rather than a tool that's at arm's length. The more we can get our heads around how these things can be companions and colleagues, the more resilience we as people should learn as a result of how they accommodate, make change, and self-correct. I also think AI can make our communities better problem solvers. Again, with the ubiquity of knowledge, humans and their communities no longer have an information gap, and should be able to ask and resolve tougher questions.
You just gave AI a pronoun: they. That’s what everyone's scared about, the sentient opportunity, right? What do you say to people who are scared in that way?
We have an obligation to help steer what's in front of us. Nothing is written in stone yet, and we are at the earliest stages of crafting the guardrails. We're dealing with limitless abundance here. We've never before had to decide whether we need a kill switch. But there is no future going forward that does not include technology. It is here, whether we like it or not. It's always going to be with us, and we have an obligation to help steer this in the right direction for humanity. Looking at who is running these organizations, I can't think of anybody who is not backing safety and ethics as the direction forward. Even open-source models can self-correct, and we can be in a safer spot without a paralyzing concern about singularity or sentience. That said, on a scale of 1-10, I am a 10 on the exciting side of AI possibilities and a 10 on the potential for harm. We cannot be asleep at the wheel and need to be proactive in guarding against the threat from bad actors, governments, and organizations.
The vertical industry solutions that become empowered are just beginning—from healthcare, to manufacturing, retail, food production, education, medicine, research, marketing, and real estate. The key to this and the first starting block is AI literacy for all.
At Time 100, there was a panel with Eric Schmidt and Yoshua Bengio. On the face of it, they had two diametrically opposed opinions. Schmidt was optimistic: it’s all going to be fine and businesses are going to behave responsibly. Bengio said basically we need a kill switch because humans will not be responsible. What do you feel about those two viewpoints?
Marc Andreessen put out an essay called The Techno-Optimist Manifesto, which is on the far extreme of arguing for pushing the boundaries at breakneck speed: growth at all costs, no concern, market self-correction, and so forth. That's one end of the spectrum.
I don't consider myself a techno optimist. I don't believe the right path is to completely ignore the risks associated. I would call myself a techno-realist, very enthusiastic about the technology, but I'm certainly focused on both the ethics and the safety aspects of how these get developed and by whom. I wouldn’t put myself in the e/acc world of people who believe in effective accelerationism at all costs—this is worrying to me.
The other end of the spectrum is the doomsayers who argue we are creating something that is going to annihilate mankind. I don’t gravitate towards this argument, either. I am enthusiastic about the future but very cautious about potential misuse.
The regulation question is a big one. My belief is that the horse has left the barn with the release of the open-source models. Models like Llama 3 and Mistral cannot be regulated the same way the closed models can. It’s going to be really hard to hold all models, as they exist today, to the same standards. The people who work on these models will tell you that they will stay safe, but right now, we cannot take their word exclusively on this. We need to make sure steps are taken to put regulation in place and to continue to monitor and evolve it. There needs to be some level of oversight, and we are now seeing bodies made up of the leading frontier-model companies coming together to advise and set universal standards, even though those will be challenging to enforce on open-source LLMs.
If on one end of the scale, AI becomes a Bond villain, and on the other end AI becomes spam email, where do you see ending up? What’s the arc of AI?
I'm trying to predict one year out, let alone five [laughs]. It's going to be really interesting for my kids and the next generation of leaders who have AI and its permutations at their disposal. I believe we're going to find happy mediums where AI is going to solve business problems that are efficiency- or product-profit based. That’s one avenue.
It’s also going to supercharge grand challenge-type thinking, allowing someone to have thousands of researchers poring over information. We're seeing this exponential curve of more processing power and greater speeds. We're improving by 10x every year, if not 100x, and that is not looking like it's going to slow down anytime soon. In the last two years, we've gone from hundreds of millions of parameters to billions of parameters to tens of trillions of parameters. The vertical industry solutions that become empowered are just beginning—from healthcare, to manufacturing, retail, food production, education, medicine, research, marketing, and real estate. The key to this and the first starting block is AI literacy for all.
These are large language models but they are predominantly in a couple of languages. If you look at Africa, one of the most dynamic, youthful, entrepreneurial, and energized communities in the world, many of their languages are not represented in AI, including some that are not written down. Allowing for the fact that we're not going to have just and fair governance on a global scale, how do we reach the ambition that you just described but also consider the notion of equity?
I think the language piece will be solved. These language models are multimodal, which means that if you can speak into a model, you can actually capture that information. If we create a new written language to transcribe what's being heard, I'm assuming that we would be able to turn that into training material for the LLM. Already the biggest models claim to cover 50 languages and 97 percent of the world's internet population.
How about the energy piece? Models take a tremendous amount of energy to train, and energy is not generated or distributed equitably.
That is a real concern. I've been talking to people about how setting up data centers, and the power grid to support them, is becoming an urgent, pressing issue. We're starting to see data centers being set up next to nuclear power plants, and people exploring ways of harnessing fission, and eventually fusion, to power them directly.
On the manufacturing side, if you look at Nvidia and the Taiwanese chip manufacturers, two things are happening. They're trying to create chips that, unlike the H100, consume less power and are smaller to operate, yet deliver similar output. I don't know what the progress is, but I know it's a clear objective.
Second is the actual consumption of these large language models themselves. Right now, most LLM owners would tell you that they're probably running less efficiently than they could be. For example, I've built a model myself that I've pumped data into, and every time I ask a question, it has to go back and reread all the material that I've entered. This concept of memory, cache, and inference will create efficiencies inside of the models, so that they're not crunching as much data for every request. Partitioning things off and running them more efficiently at a hyper level will help.
Think about the conversation we’re having now. I don't have to go back through my entire life history to draw on information for this conversation. I only work with information that is nearest to our topic. We used to call it caching, but in AI, going forward, some of these systems will start behaving the way our human brains do, with near-term recall and long-term archived memory. Retrieval will look different, and it will affect how LLMs function and consume resources. The energy savings from these improvements can be massive, but we’re not there yet. It is, however, a huge priority for the frontier models to address.
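The retrieval idea described above can be sketched in a few lines: instead of rereading an entire archive for every question, score stored chunks of text against the query and pass only the nearest few to the model. This is a minimal illustration, not how any particular LLM implements memory; it uses simple word overlap in place of real embedding similarity, and all data and names here are invented for the example.

```python
def score(query: str, chunk: str) -> int:
    """Crude relevance score: number of lowercase words shared
    between the query and a stored chunk (a stand-in for
    embedding similarity)."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most relevant to the query, so only
    those, rather than the whole archive, accompany the prompt."""
    return sorted(chunks, key=lambda ch: score(query, ch), reverse=True)[:k]

# A toy "long-term memory" of previously ingested material.
memory = [
    "Q3 revenue grew 12 percent on strong enterprise sales.",
    "The data center migration finished in March.",
    "Churn fell after the onboarding redesign.",
]

# Only the nearest chunk is retrieved for this question; the model
# never has to re-crunch the rest of the archive.
context = retrieve("What drove revenue growth?", memory, k=1)
```

The point of the sketch is the shape of the saving: the expensive model only ever sees the retrieved `context`, so compute per request scales with the size of the relevant slice, not the whole data store.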

