The tyranny of Big Data: Why FAANG's AI approach is failing us and endangering our future
The FAANG-led obsession with big data is not only stifling AI innovation but also creating a dangerous blind spot in AI safety, as highlighted by the recent Paris AI Summit
Special thanks to my friends, Prof. Guan Seng Khoo and Prof. Farhad Bolouri, for their insights.
Today, the dominant narrative in AI centers on "big data"—the notion that more data inevitably leads to better, more intelligent models.
This philosophy, championed by FAANG companies (Facebook, Amazon, Apple, Netflix, Google), has fueled impressive advancements in AI, particularly in areas like natural language processing and image recognition.
However, a growing chorus of critics argues that this obsession with big data is leading us down a dangerous path, one that prioritizes quick profits and centralized control over genuine innovation, societal benefit, and, crucially, AI safety.
My view:
The illusion of "average" intelligence and its safety implications
FAANG companies, driven by their insatiable appetite for user data and their focus on maximizing shareholder value, have built AI empires on the foundation of big data.
Their models are trained on massive datasets scraped from the internet, capturing billions of users' collective behavior and preferences.
While this approach has yielded impressive results in specific domains, it has inherent limitations and significant safety implications.
Bias and unpredictability
Big data often reflects societal biases and inequalities, leading to AI models that perpetuate and amplify these biases.
Furthermore, training models on massive, homogeneous datasets can result in "average" intelligence that fails to capture the nuances and diversity of human experience.
This can lead to unpredictable and potentially harmful outcomes in real-world scenarios.
The Black Box problem and lack of control
The complexity of these big data-driven models often makes them opaque and difficult to understand, raising concerns about transparency, accountability, and the potential for unintended consequences.
This lack of explainability hinders our ability to identify and mitigate potential risks, making controlling or correcting harmful behaviors difficult.
Centralized control and data exploitation
The reliance on big data concentrates power in the hands of a few tech giants. This raises concerns about data privacy, surveillance, and the potential for these companies to exploit user data for their own benefit, at the expense of user safety and societal well-being.
This concentration of power also limits the ability of independent researchers and developers to contribute to AI safety research and oversight.
Beyond Big Data (an aside): how Small Data and Vertical AI can drive sustainability and value
For years, the narrative around data has been "bigger is better."
Massive data centers, churning through colossal datasets, have become synonymous with innovation and competitive advantage.
However, this paradigm faces a growing challenge: sustainability.
The energy demands of big data are substantial, contributing significantly to carbon emissions and raising concerns about environmental responsibility.
For today's business leaders and board members, the question is no longer just how much data we can process but how efficiently we can extract value from it.
The answer may lie in a shift towards small data and vertical AI.
The traditional big data approach, while powerful, comes at a cost. The sheer scale of these operations necessitates sprawling data centers, consuming vast amounts of energy for processing, storage, and cooling.
This translates directly into a large carbon footprint, conflicting with increasing stakeholder expectations for environmentally conscious practices.
Furthermore, the expense associated with these massive infrastructures can strain budgets and limit accessibility for smaller organizations.
A compelling alternative is emerging: small data and vertical AI. This approach leverages targeted, smaller datasets relevant to specific business needs. Instead of a one-size-fits-all approach, vertical AI tailors solutions to particular industries or applications, maximizing efficiency and minimizing computational overhead. This shift offers several key advantages:
Sustainability: Reduced data volumes translate to lower energy consumption, shrinking the carbon footprint of data operations. Smaller, modular data centers become feasible, potentially located closer to renewable energy sources, further enhancing sustainability.
Cost-effectiveness: Smaller data centers and less complex processing requirements lead to significant cost savings in infrastructure, maintenance, and energy consumption. This frees up resources for strategic initiatives and innovation.
Enhanced performance: Processing data closer to the source (edge computing) reduces latency and improves the performance of AI applications. This is particularly crucial for real-time decision-making and time-sensitive operations.
Improved data privacy: Processing data locally enhances data privacy and security, minimizing the risks associated with transmitting sensitive information to centralized servers. This aligns with growing regulatory scrutiny and customer expectations regarding data protection.
Agility and innovation: Small data and vertical AI empower organizations to be more agile and responsive to changing market conditions. Targeted solutions can be developed and deployed faster, driving innovation and creating new revenue streams.
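To make the small-data idea concrete, here is a minimal sketch (all names and records are hypothetical, and no real system is implied): a tiny logistic-regression classifier trained from scratch, in pure Python, on a handful of expert-curated domain records. No data center, no framework, and the model's two weights remain directly inspectable.

```python
import math

# Hypothetical curated "vertical" dataset: a few domain-specific records
# with expert-engineered features, not web-scale scrapes.
# Each row: ([risk_score, txn_volume], is_fraud)
data = [
    ([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.7, 0.7], 1),
    ([0.1, 0.2], 0), ([0.2, 0.1], 0), ([0.3, 0.2], 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Train a tiny logistic regression with plain stochastic gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(2000):
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        err = p - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# The model fits the six curated examples, and a domain expert can read
# its weights directly to see which features drive the prediction.
for x, y in data:
    assert round(predict(x)) == y
print("weights:", [round(wi, 2) for wi in w], "bias:", round(b, 2))
```

The point is not that six examples suffice in practice, but that when the dataset is small and curated, the entire pipeline fits in a page of auditable code.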
This isn't to suggest that big data is obsolete. It still plays a vital role in certain applications.
However, small data and vertical AI offer a compelling path forward for many organizations, particularly those prioritizing sustainability and cost-effectiveness.
As decision leaders and board members, we must challenge the conventional wisdom of "bigger is always better" regarding data.
By embracing a more strategic and targeted approach, we can unlock the full potential of AI while minimizing our environmental impact and maximizing long-term value.
The future of data processing lies in intelligent efficiency, not just sheer volume. It's time to explore the power of small data and vertical AI to drive both business success and a sustainable future.
The case for small data, vertical AI, and enhanced safety
In contrast to the FAANG approach, a growing movement advocates using "small data" and vertical AI, particularly in specialized domains like healthcare, finance, and education.
This approach, emphasizing context, specificity, and data quality, offers a more promising path towards safer and more beneficial AI.
Context and safety
Small data emphasizes the importance of context and specificity, recognizing that different domains require tailored AI solutions sensitive to their respective fields' nuances and complexities.
This context awareness can enhance safety by enabling the development of AI systems better aligned with human values and domain-specific ethical considerations.
Data quality and robustness
Instead of relying on massive, generic datasets, small data prioritizes the quality and relevance of data, ensuring that AI models are trained on carefully curated and domain-specific information.
This focus on data quality can improve the robustness and reliability of AI systems, reducing the risk of unpredictable or harmful behavior.
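As a hedged illustration of what curation can look like in practice (the schema, field names, and plausibility ranges below are hypothetical), small datasets allow every record to be validated before it ever reaches a model:

```python
# Minimal sketch of curation-time validation for a small, domain-specific
# dataset. With small data, each record can be checked individually.
def validate_record(record):
    """Return a list of problems; an empty list means the record is usable."""
    problems = []
    age = record.get("patient_age")
    if not isinstance(age, (int, float)):
        problems.append("patient_age missing or non-numeric")
    elif not 0 <= age <= 120:
        problems.append("patient_age out of plausible range")
    if record.get("diagnosis_code") in (None, ""):
        problems.append("diagnosis_code missing")
    return problems

dataset = [
    {"patient_age": 54, "diagnosis_code": "E11.9"},
    {"patient_age": 200, "diagnosis_code": "E11.9"},  # implausible age
    {"patient_age": 37, "diagnosis_code": ""},        # missing code
]

clean = [r for r in dataset if not validate_record(r)]
print(f"kept {len(clean)} of {len(dataset)} records")  # kept 1 of 3 records
```

This kind of record-level scrutiny is simply infeasible when training sets number in the billions of scraped documents.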
Explainability and trust
Small data-driven models are often more transparent and explainable, fostering trust and accountability in critical applications where human oversight is essential.
Explainable AI is crucial for ensuring that AI systems are used responsibly and that humans can understand and validate their decisions.
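A brief sketch of why simple models lend themselves to explanation (the feature names and weights below are invented for illustration, not taken from any real scoring system): a linear model can justify each decision as a sum of named per-feature contributions that a human reviewer can check.

```python
# Hypothetical linear credit-scoring model: every decision decomposes
# into per-feature contributions, so a reviewer can see exactly why
# a given applicant received a given score.
weights = {"income_ratio": 2.0, "late_payments": -1.5, "account_years": 0.4}
bias = -0.3

def explain(applicant):
    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = bias + sum(contributions.values())
    return score, contributions

applicant = {"income_ratio": 0.8, "late_payments": 2, "account_years": 5}
score, contributions = explain(applicant)
print(f"score = {score:.2f}")
for feature, c in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {feature}: {c:+.2f}")  # e.g. late_payments: -3.00
```

Contrast this with a billion-parameter model trained on scraped web data, where no comparable per-decision accounting exists.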
Decentralized development and collaboration
Vertical AI encourages a more decentralized and collaborative approach to AI development.
It empowers domain experts and smaller organizations to contribute to creating AI solutions that meet their specific needs.
This decentralized approach can foster greater diversity in AI development, potentially leading to more robust and ethically sound AI systems.
The Paris AI Summit: a realistic call for global action?
The recent Paris AI Summit underscored the escalating global apprehension surrounding Artificial Intelligence's safety and ethical ramifications.
High-ranking government officials and subject matter experts worldwide convened to deliberate on the pressing necessity for international collaboration and regulatory frameworks.
“The best way is to identify talented researchers, fund their research for a few years, and get out of their way. Large programs with top-down organization have a history of failure.”—Yann LeCun, Vice President and Chief AI Scientist at Meta.
See the References section below for more.
These frameworks are essential to guarantee the responsible development and deployment of AI technologies.
The summit served as a platform to address concerns about potential risks and unintended consequences, emphasizing the importance of proactive measures to mitigate these concerns and ensure that AI serves humanity's best interests.
International collaboration
A concerted and collaborative global effort is paramount to effectively addressing the multifaceted challenges inherent in AI safety. This necessitates nations transcending borders and political differences to actively share knowledge, fostering a collective intelligence that can preempt and mitigate potential risks.
Furthermore, establishing universally recognized standards and protocols can provide a framework for responsible AI development and deployment, ensuring that ethical considerations remain at the forefront of technological advancement. Additionally, creating robust oversight mechanisms, coupled with clear lines of accountability, will be instrumental in maintaining public trust and safeguarding against potential misuse or unintended consequences.
This global endeavor must be characterized by transparency, inclusivity, and a shared commitment to harnessing AI's transformative power for humanity's collective benefit.
Ethical frameworks and regulations
The development and implementation of AI technologies necessitate the establishment of strong ethical frameworks and regulatory measures.
These frameworks should be designed to steer AI's evolution in a direction that respects human values, upholds individual rights, and fosters overall societal well-being.
This involves addressing potential biases in AI algorithms, ensuring transparency and accountability in AI decision-making processes, and safeguarding against unintended consequences or malicious use of AI.
Furthermore, these ethical guidelines should promote fairness, equity, and inclusivity in the access to and benefits of AI technologies while also considering the potential impact of AI on employment, privacy, and social structures.
International collaboration and ongoing dialogue among stakeholders, including governments, industry leaders, researchers, and civil society, are essential to develop comprehensive and adaptable ethical frameworks that can effectively guide the responsible and beneficial use of AI in the years to come.
Public engagement and education
To foster a sense of collective responsibility and informed decision-making around AI, the public must be equipped with a comprehensive understanding of AI's multifaceted implications, spanning its potential benefits, inherent risks, ethical considerations, and societal impact.
This foundational knowledge will empower individuals to engage critically with AI technologies, ensuring their development and deployment align with human values and contribute positively to collective well-being.
This public education initiative should encompass a wide array of accessible resources and platforms for open dialogue, encouraging individuals from all walks of life to actively participate in shaping the future of AI.
By cultivating a society that is both AI-literate and AI-aware, we can collectively navigate the complexities of the AI era and ensure that these transformative technologies are harnessed responsibly for the betterment of humanity.
The urgent need for a paradigm shift
The FAANG-driven obsession with big data is not only stifling innovation and hindering AI's true potential to benefit society but also creating significant risks to AI safety.
As we explored in "Beyond Big Data," this reliance on massive datasets also has a significant environmental cost, with energy-intensive data centers contributing to a growing carbon footprint.
By prioritizing quick profits and centralized control, these companies sustain a system that favors "average" intelligence, perpetuates biases, erodes user trust, and increases the potential for harm.
It's time for a paradigm shift in AI development that prioritizes safety, ethics, and societal well-being alongside technological advancement.
This shift must include a move beyond the "bigger is better" mentality.
By embracing small data, vertical AI, and collaborative innovation, we can unlock AI's true potential to address pressing global challenges, empower individuals, and foster a more equitable and sustainable future.
These targeted approaches can reduce energy consumption, improve performance, enhance privacy, and foster greater agility.
The future of AI lies not in the hands of a few tech giants but in the collective wisdom and ingenuity of a diverse and empowered global community committed to building safe, responsible, and beneficial AI systems.
Failing to make this shift could have dire consequences. The unchecked pursuit of "bigger is better" AI could lead to a future where powerful technologies are deployed without adequate safeguards, potentially jeopardizing our collective future.
The Paris AI Summit serves as a stark reminder that the challenges of AI safety are global in nature and require a coordinated international response.
By embracing the principles of collaboration, ethical development, and public engagement, and by exploring and implementing more sustainable approaches like small data and vertical AI, we can navigate the complex landscape of AI and ensure that this transformative technology is used to build a better future for all.
What do you think?
References
China's ex-UK ambassador clashes with 'AI godfather' on panel, by Zoe Kleinman. More on the BBC.
“We agreed with much of the leaders’ declaration and continue to work closely with our international partners. This is reflected in our signing of agreements on sustainability and cybersecurity today at the Paris AI Action summit,” the spokesperson said. “However, we felt the declaration didn’t provide enough practical clarity on global governance, nor sufficiently address harder questions around national security and the challenge AI poses to it.”—US and UK refuse to sign Paris summit declaration on ‘inclusive’ AI. More on the Guardian.
The event will be used as the platform to launch a “public interest” partnership called “Current AI” with an initial €387 million ($400 million) investment. The initiative aims to raise $2.5 billion over the next five years and will involve governments, businesses and philanthropic groups that will provide open-source access to databases, software and other tools for “trusted” AI actors, according to Macron’s office.—World leaders and tech giants converge in Paris for AI summit. More on Euronews.
Demis Hassabis, CEO of DeepMind, known for AlphaGo and other AI breakthroughs, discusses artificial intelligence and the impact of DeepSeek on the broader industry, productivity, and regulation in the US and the European Union with Bloomberg Technology. More on YouTube.
Alphabet CEO Sundar Pichai told attendees of the AI Action Summit in Paris on Monday that the technology heralded a “golden age of innovation” and that “the biggest risk was missing out.” At an event where countries and industry have focused more on deploying AI than on reining it in, Pichai called for ecosystems of AI innovation and adoption like one he said was growing in France. More on YouTube.
Microsoft CEO Satya Nadella: “AI simultaneously reduces the floor & raises the ceiling for all of us” at Davos, January 2025. More on YouTube.