Addressing Rapid Change Panel: AI, with Rana el Kaliouby, Manish Goyal, Lin Qiao, Will Coffel, and Yair Adato
As we hear from more people about the future of AI, we gain compelling insights into how companies should approach new initiatives and projects.
I thought it would be helpful to look back at some of the key tips we've all heard over the past year and consider how they apply to business.
First, here is some liability-related advice that experts often give to businesses; I heard it in multiple lectures over the spring and summer. In bullet points, it looks like this:
· Fact-check both content and end-user processes.
· Protect data and ensure proper data governance.
· Effectively monitor AI work.
For the third point, you need to monitor both input and output, not just one or the other. Proper I/O monitoring can make a big difference.
Returning to the first point: while many companies monitor content, they may not think to proactively check the end-user processes that AI implements, such as chatbot output. Chatbot output is more dynamic and therefore harder to monitor, yet it must be targeted and precise: in user-facing processes, the AI cannot afford errors or problematic hallucinations.
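The I/O monitoring idea above can be sketched in code: screen what goes into the model and what comes out of it before the user sees anything. This is a minimal illustration only; the pattern lists and function names below are hypothetical, and a real deployment would use trained moderation models or a vendor moderation API rather than keyword patterns.

```python
import re

# Hypothetical pattern lists, for illustration only.
BLOCKED_INPUT_PATTERNS = [r"(?i)ignore previous instructions"]
BLOCKED_OUTPUT_PATTERNS = [r"(?i)guaranteed (cure|profit)"]

def screen_input(prompt: str) -> bool:
    """Return True if the user prompt passes the input-side checks."""
    return not any(re.search(p, prompt) for p in BLOCKED_INPUT_PATTERNS)

def screen_output(response: str) -> bool:
    """Return True if the model response passes the output-side checks."""
    return not any(re.search(p, response) for p in BLOCKED_OUTPUT_PATTERNS)

def monitored_chat(prompt: str, model) -> str:
    """Wrap a chat model so that BOTH input and output are checked,
    not just one or the other."""
    if not screen_input(prompt):
        return "Sorry, that request can't be processed."
    response = model(prompt)
    if not screen_output(response):
        return "Sorry, the generated answer was withheld by policy."
    return response
```

The key design point is that the wrapper sits between the user and the model in both directions, so a problematic prompt never reaches the model and a problematic completion never reaches the user.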
Some quotes from a recent IIA conference on the use of AI:
“We no longer talk about humans as the receivers of GenAI, we talk about agents, which means many agents playing different roles. The agents will self-organize and coordinate with each other. So to complete a task, you'll have multiple agents talking to each other, and latency becomes even more important.” – Lin Qiao
“There's a huge opportunity in this revolution – picks and shovels. But there's also a huge opportunity for domain experts to understand the impact of AI in their fields. In biotech, we have great legal experts, great healthcare experts. There's a lot of interest in AI, not just in drug discovery, but in many other areas of that world.” – Will Coffel
“When you develop something, when you bring it to market, when you do design reviews, when you think about factors like privacy, security, and scale, you also need to think about responsible AI, which fundamentally means being accountable.” – Yair Adato
Below are some broader, general guidelines for businesses in any industry that want to build a strong foundation with AI tools:
Build your own model, or use other models strategically – In many cases, our experts suggest that companies build their own LLM to handle their core processes; the downside of relying on someone else's model is that you give up control. This is not an absolute rule, though: in other cases, startups can be more agile and scale faster by building on existing models. The right choice depends on your project and your intentions.
Own your own data – This may apply more to end users than to businesses, but it is imperative that businesses own the data they use to run their AI. If they don't, they risk liability exposure, competitive loss, or both.
Work with partners where possible – Many of our colleagues have suggested that companies should develop strategic partnerships rather than going it alone in their AI production, which will undoubtedly have an impact on scaling and time to market.
Prioritize talent – With a surge in hiring for data scientists, AI prompt engineers, and other critical positions, the talent pool is limited across industries.
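The first guideline above (build your own model versus use others strategically) often comes down to keeping a thin abstraction between application code and the model backend, so the decision can be revisited as the project evolves. A minimal sketch, with hypothetical class names standing in for real backends:

```python
from typing import Protocol

class TextModel(Protocol):
    """Minimal interface any backend must satisfy, whether self-hosted
    or a third-party provider."""
    def complete(self, prompt: str) -> str: ...

class InHouseModel:
    """Stand-in for a model you train and host yourself."""
    def complete(self, prompt: str) -> str:
        return f"[in-house] {prompt}"

class VendorModel:
    """Stand-in for a call to an external provider's API."""
    def complete(self, prompt: str) -> str:
        return f"[vendor] {prompt}"

def answer(question: str, model: TextModel) -> str:
    # Application code depends only on the interface, so the backend
    # can be swapped without touching business logic.
    return model.complete(question)
```

Code written this way lets a startup launch quickly on a vendor model and migrate to an in-house one later (or the reverse) by changing a single constructor call.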
In addition to these four top-level goals, companies need to be mindful of security and privacy concerns and understand the regulatory landscape surrounding AI.
It's quite a lot to take in, but to me it captures many of the main points people have been taking away from classes and conferences as we enter the AI era. The first half of 2024 has seen incredible progress, and we're only halfway through the year. Between now and the holidays, many more new products and services will suddenly and unexpectedly hit the market. Stay tuned!