Welcome to insideAI News’s “Heard on the Street” round-up column. In this feature, we highlight thought-leadership commentaries from members of the AI industry ecosystem. We cover the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace.
California’s recent AI regulation bill (SB 1047). Commentary by Jim Liddle, Chief Innovation Officer, Data Intelligence and AI, at Nasuni
“The bill is an ambitious effort to regulate frontier AI models, aiming to mitigate risks associated with powerful AI systems but potentially concentrating power within large tech companies. The compliance burdens and potential penalties might stifle innovation from smaller players and open-source developers. Although the draft includes some provisions for open-source AI, these seem insufficient to counterbalance the overall regulatory weight. The board’s composition—representatives from open-source, industry, and academia—appears balanced initially, but having only one member each from these critical stakeholder groups may lack sufficient diversity of perspectives. Additionally, the authority of the FMD (Frontier Model Division) to annually update the definition of a covered model could significantly alter which AI models fall under regulation, and the draft legislation does not provide clear mechanisms for appealing or challenging FMD decisions. A more balanced framework that accommodates various scales of AI development, especially open-source initiatives, would be preferable. Hopefully, future revisions will address these concerns.”
California’s recent AI regulation bill (SB 1047). Commentary by Manasi Vartak, Chief AI Architect at Cloudera
“This regulation is premature, as we don’t yet fully understand what harms these models can cause or how best to put guardrails in place to prevent those harms (these models have really only been around for 2-3 years). These LLMs are foundation models, i.e., general-purpose models. The same model that can be used to teach students math can also potentially be used to hack a bank’s systems. In essence, they are the Swiss Army knives of models. A Swiss Army knife can be used to cut fruit or to cause harm, but we don’t ban Swiss Army knives. Similarly, it is the use case these models are applied to that matters more than the model itself.
As a step in the right direction, the watered-down version of the bill passed today removed criminal liability and removed smaller, fine-tuned models from the set of covered models.”
Amended Californian AI legislation advances Big Tech interests over public safety. Commentary by Bruna de Castro e Silva, AI Governance Specialist at Saidot
“By limiting liability to cases of ‘real harm’ or ‘imminent danger,’ Silicon Valley risks creating an environment where innovation and corporate interests take precedence over public welfare and the protection of human rights. The original intent of the bill was to establish a proactive, risk-based framework, as first introduced in the EU AI Act, to ensure that AI products are safe before being released to the public. However, this revised bill encourages a reactive, ex-post approach that addresses safety only after damage has occurred. AI risks and harms have already been extensively documented, and AI incident databases, such as the OECD AI Incidents Monitor, provide concrete evidence of the real harms that can arise from AI.
Adopting an ex-post legislative approach also disregards the magnitude and the complexity of remedying AI-related harms once they occur. Research has shown that the scale, unpredictability and opacity of AI systems present multifaceted challenges to remedying AI-related harms.
These amendments not only advance the corporate interests of Big Tech companies but also undermine the fundamental principle of AI governance as a practice that must be carried out continuously throughout the product lifecycle.
Companies like Anthropic have played a significant role in watering down these regulations, leveraging their influence to shift the focus away from stringent pre-release testing and oversight. AI systems, especially those with the potential for widespread harm, require rigorous pre-release testing and transparent oversight to prevent future damage.
AI safety can’t be an afterthought; it must be embedded in the development process from the outset. Comprehensive testing is crucial to identify and mitigate risks early in the development process, ensuring that AI products, whether large or small, do not inadvertently cause harm once deployed. Without thorough evaluation, the consequences could put the public at significant risk, from unintended biases to critical system failures.”
The best way for supply chains to use AI. Commentary by Supplyframe CMO Richard Barnett
“The best way to use AI when it comes to supply chain is to help with the automation of repetitive tasks and processes across supply chain functions, along with the realization of new forms of strategic decision making and collaboration. In a world where we have access to billions of data points at any time, the ability to automate and streamline decision making with the help of AI enables suppliers to cut down response times, improve customer satisfaction and increase profit margins.”
White House open-source AI decision is “catastrophic.” Commentary by Paul Kirchoff, CEO and Founder of EPX Global
“Because of this unique technological condition, having zero oversight or industry standards is a risk to business, security, and society. The push to open source has many benefits, but the desire to decentralize power away from private companies means that something must replace the natural responsibilities those companies bear for support, enforcement, and safety — a need that is even more important with AI.
This is not a situation like WiFi, where industry standards made it easier for the technology to be adopted. This is a situation where the lack of oversight of a technology more powerful than all the firearms put together could be catastrophic.”
Meta’s move to open-source AI. Commentary by Paul Kirchoff, CEO and Founder of EPX Global
“A move by any industry to use more open-source code is a positive sign, and it is a necessary alternative to the grip of power that can come with a private company’s success. However, driving adoption of anything open source requires a support model and a marketing machine that can compete with private budgets. In the early days of Dell, we were, of course, experimenting and looking to become first to market deploying Linux – but it was only after Red Hat pushed marketing and support into the world that Linux moved from an efficient niche to a major computing platform. We must also not forget that private brands bring more than just a single technology: their reputation, ancillary tools, partners, and more. With AI, it is not just performance that developers will care about – these other areas are among the weighted-average decision factors too.”
SEC’s revised AI timeline. Commentary by Mike Whitmire, CEO, FloQast
“Although we have not yet seen a ruling on how financial advisors and brokers can utilize artificial intelligence, the likelihood of future governance from the SEC is high and could have a significant impact on business in the United States. Any regulatory measures will significantly elevate the strategic importance of compliance and risk management teams within the organization. Rather than waiting for governance, organizations should optimize and adequately resource these teams to navigate future regulatory shifts, especially around AI, ensure workflow transparency and fairness, and protect stakeholders’ and employees’ interests.”
Global IT Outage. Commentary by SandboxAQ CEO Jack Hidary
“There has been an increasing trend to use AI to help developers write software code. This can indeed boost developer productivity, but where we need more help from AI is in improving the quality assurance of code. This major global outage that brought thousands of flights and businesses to a standstill reminds us that humans are not very good at catching errors in thousands of lines of code – this is where AI can help a lot. In particular, we need AI trained to look for the interdependence of new software updates with the existing stack of software.”
Google’s AI translation momentum. Commentary by Olga Beregovaya, VP of AI at Smartling
“The use of the PaLM family of models to power Google’s translation engine is the convergence we have been waiting for—NMT models meeting the power of generative AI. PaLM models do not train themselves out of nothing for these 110 languages, many of which are long-tail. Where LLMs provide a definitive win is in their ability to extrapolate between adjacent language families, even when learning from very sparse datasets.
This is an important watershed moment for Google Translate. Using LLMs indicates that they provide equal, and likely even better, quality than neural machine translation for these languages. It also implies that these models may inform the Google Adaptive Translation offering and perhaps even be superseded by Google Gemini models in the future.”
Old Gas, New Ferrari. Commentary by Kevin Campbell, CEO, Syniti
“You wouldn’t put old gas in a new Ferrari – so why do businesses fuel their organizations with poor-quality data? According to a recent study from HFS, only 30% of cloud migrations are successful. The main culprit? Poor-quality data. Simply put, if you have garbage data, you’ll get garbage results – no matter how shiny and new the infrastructure may be.
A data-first strategy – prioritizing clean, high-quality data as a business imperative – is the only way organizations can truly harness the latest technological advancements hitting the marketplace.
Take generative AI, for example. In a Forbes Advisor study, 97% of surveyed business owners think ChatGPT will benefit their businesses. More than 30% of those businesses intend to create website content using ChatGPT, and 44% of them intend to translate that content into multiple languages. Businesses from all industries are considering how they might use this technology to gain the competitive upper hand. But organizations need clarity about the purpose and potential benefits of implementing generative AI before they begin. If not fueled by quality data, generative AI quickly breaks down on the side of the road.
Quality data underpins any business process. It has the power to disrupt markets and break new ground – but only when it’s trusted and understood. While most organizations today understand the importance of data quality, achieving it is another story. It’s often perceived as too lengthy and complicated a process.
To start improving data quality, the best thing a company can do is focus on one area and just start working on it. If you’re not sure where to begin, start with a business process that you know is problematic – something causing waste, rework, irritation, or revenue loss. Then, determine the essential data elements used in that business process and begin wrapping those elements in the rules and policies that evaluate whether the data is fit for purpose or causing problems.
In short, data quality isn’t a “one and done” endeavor – it’s about continuous maintenance and regular effort and investment.”
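Campbell’s advice to wrap essential data elements in rules and policies can be made concrete with a small validation harness. The sketch below is illustrative only: the field names (supplier_id, unit_price, lead_time_days) and thresholds are hypothetical stand-ins for whatever a problematic business process actually uses, not anything Syniti prescribes.

```python
# A minimal sketch of "wrapping data elements in rules": each rule evaluates
# whether one essential data element is fit for purpose. Field names and
# thresholds are hypothetical examples, not vendor recommendations.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Rule:
    field: str
    description: str
    check: Callable[[Any], bool]

RULES = [
    Rule("supplier_id", "must be present and non-empty", lambda v: bool(v)),
    Rule("unit_price", "must be a positive number",
         lambda v: isinstance(v, (int, float)) and v > 0),
    Rule("lead_time_days", "must be an integer between 0 and 365",
         lambda v: isinstance(v, int) and 0 <= v <= 365),
]

def audit(records: list[dict]) -> list[str]:
    """Return one message per rule violation found in the records."""
    failures = []
    for i, record in enumerate(records):
        for rule in RULES:
            if not rule.check(record.get(rule.field)):
                failures.append(f"row {i}: {rule.field} {rule.description}")
    return failures

if __name__ == "__main__":
    sample = [
        {"supplier_id": "S-01", "unit_price": 9.5, "lead_time_days": 14},
        {"supplier_id": "", "unit_price": -2.0, "lead_time_days": 400},
    ]
    for failure in audit(sample):
        print(failure)
```

Starting small like this also fits the “one and done” warning: the rule set can grow one essential data element at a time and be re-run continuously as part of regular maintenance.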
AI’s Achilles heel: Data quality is the line between innovation and liability. Commentary by Steve Smith, U.S. Chief Operating Officer at Esker
“AI is only as good as the data it holds, and with the average company managing around 162.9TB of data, much of it can be outdated, biased, or outright inaccurate. That’s why it’s so important to make sure companies are using the right AI tools trained with the latest, most accurate and relevant data for their use case … AI hallucinations can significantly impact the accuracy and reliability of automated decision-making systems in commercial applications and often set off a chain reaction. For example, when outputs are inaccurate, this can lead not only to poor decision-making but also to decreased trust among users. This, in turn, leads to operational inefficiencies, requiring more time for human oversight and correction to ensure processes continue to flow seamlessly.”
Judge dismisses developer’s claim against OpenAI. Commentary by OpenUK CEO, Amanda Brock
“We are at a pivotal point in AI, where policymakers, legislators, and courts are asked to make decisions about the usage of content, including code under open source software licenses. Decisions about the use of content are critical, and models must have access to data to train. At the same time, there must be respect for any licenses, and while the US judge did not uphold the copyright claim, saying that the requirement of identity was not proven, he does not say that it might not be proven in another case – and the contract claims have been upheld. This is a major decision for open source software licensing, and the US’s approach may well have influence in many other jurisdictions.”
How synthetic data can make AI safer. Commentary by Henry Martinez, global head of solutions and consulting at 3Pillar Global
“When the pharmaceutical industry introduced synthetic drugs for the first time, they conducted rigorous tests to ensure their safety and efficacy. We need to apply the same rigor to synthetic data. Synthetic data can effectively test AI models’ performance and intelligence. With proper oversight, it can also be used to train AI models and guard against data breaches and privacy violations. The key is deep data domain knowledge from both human and AI assets, clear definitions of the types of tasks AI models should be trusted with, and continuous monitoring for data skew as part of a synthetic data augmentation plan. When handled with the proper care, synthetic data, coupled with human innovation, can identify new opportunities and rhythms in data, enhancing monetization strategies for specific clusters and differentiating our data refineries.”
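Martinez’s call for “continuous monitoring for data skew” can be illustrated with a simple distribution check. The sketch below assumes NumPy and SciPy are available and uses a two-sample Kolmogorov–Smirnov test to flag numeric columns whose synthetic distribution has drifted away from the real one; the column name and significance threshold are arbitrary illustrations, not recommended values.

```python
# Illustrative skew monitor for a synthetic data augmentation plan: a
# two-sample Kolmogorov-Smirnov test flags columns whose synthetic
# distribution no longer matches the real one. Threshold is for demo only.
import numpy as np
from scipy.stats import ks_2samp

def skew_report(real: dict[str, np.ndarray],
                synthetic: dict[str, np.ndarray],
                alpha: float = 0.05) -> dict[str, bool]:
    """Return {column: True if skew detected} for each shared column."""
    report = {}
    for column in real.keys() & synthetic.keys():
        result = ks_2samp(real[column], synthetic[column])
        report[column] = result.pvalue < alpha  # low p-value: distributions differ
    return report

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    real = {"order_value": rng.normal(100, 15, 5000)}
    synthetic = {"order_value": rng.normal(120, 15, 5000)}  # drifted upward
    print(skew_report(real, synthetic))  # {'order_value': True}
```

Run on a schedule as synthetic batches are generated, a check like this gives the continuous-monitoring loop a concrete trigger for regenerating the synthetic set or revisiting the augmentation plan.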
AI in Customer Engagement. Commentary by Ken Yanhs, CMO, Zoovu
“Every CMO will tell you that AI is impacting marketing strategy, staffing and campaign tactics. The one area with the most impact right now is the use of AI to engage customers. While AI in ecommerce marketing campaigns has been useful for years, the rise of large language models (LLMs) and generative AI is changing how CMOs approach customer engagement.
For example, generative AI provides shortcuts for writing copy, product pages, and chat scripts, but customers don’t fully trust the information they’re getting. In fact, a recent survey by Forrester Consulting and Zoovu asked over 400 executives at companies with more than $200M in B2B ecommerce revenue about the use of AI in their marketing. The top two results were for implementing chatbots and A/B testing, at 35% and 32%, respectively.
Going forward, CMOs, especially in ecommerce, will need to ensure the quality of the information provided by generative AI. The companies that can match the speed and conversational aspects of generative AI with accurate information will see the biggest returns on their campaigns, accruing trust and loyalty in their brands.”
AI’s Coming Energy Crisis. Commentary by Kirk Offel, CEO of Overwatch
“The energy crisis is hitting our industry hard, putting us at the center of the conflict between phasing out fossil fuels and the rise of AI, which relies heavily on an electrical grid powered by coal and natural gas. It’s ironic that tech giants, who championed climate initiatives, now face the reality of their energy-intensive innovations. With AI expected to drive a 160% increase in data center power demand by 2030, wind and solar won’t suffice. Without a shift to nuclear power, we’ll have to rely on natural gas. This isn’t just a tech or data center industry issue; it affects everyone. The 5th Industrial Revolution is upon us, and we must lead the way.”
Is the OpenAI Open Letter as Alarming as it Seems? Commentary by Raj Koneru, founder and CEO of Kore.ai
“The open letter from OpenAI’s employees might seem alarming at first glance, but it actually points to a fundamental truth: AI should not be monopolized by a select group of companies; instead, it should be as accessible as the internet. While the concerns about AI’s rapid advancement and lack of oversight are valid, it’s impractical for governments to monitor everything. This means enterprises should lead the way in ensuring responsible AI usage.
The AI model race won’t end anytime soon, and no single model will dominate every application. This diversity is great for businesses and organizations of all sizes, providing them with a menu of options to choose from based on their unique needs. The open-source community will continue to innovate, sometimes matching the capabilities of major players like OpenAI, Google, or Microsoft. This ongoing competition is healthy and ensures businesses can always select the best-in-class models for their needs.”