Book Process Report

Competing Works

Task: To avoid producing a redundant or uninformed book, use AI to consider our thesis and search for recently published works that argue either for or against that point.

(NB: We had already encountered a book by Gary Marcus on our own, so we used it as a starting point in the prompt.)

Prompt: Gary Marcus wrote a book in 2024 titled "Taming Silicon Valley: How We Can Ensure That AI Works for Us." What are the thesis and main arguments of this book? Give me a sketch of what makes Gary Marcus a reputable authority to make this argument. What other books by reputable authors have been published since December 2023 that make similar arguments? What reputable books making counter-arguments have been published in that same time frame? Give a capsule summary of each of these books, pro and con, and a capsule bio of their authors.

Search Result

Gary Marcus's "Taming Silicon Valley": The Thesis of AI's Oligarchy Problem

The Core Argument: Big Tech Has Captured AI Development

Gary Marcus's 2024 book "Taming Silicon Valley: How We Can Ensure That AI Works for Us" presents a stark warning about the concentration of AI power in the hands of a few major technology companies. Marcus argues that we are witnessing the emergence of an "AI oligarchy" where corporations like OpenAI, Google, and Meta control the development and deployment of AI systems without adequate oversight or regulation[1][2][3][4].

  • The book identifies four major problems with current AI development: AI systems are unreliable and unpredictable, AI development is dominated by a small number of powerful companies, there is insufficient government regulation of AI, and AI systems are being deployed widely despite these fundamental flaws[2].
  • Marcus warns that Big Tech companies are "playing both the public and the government" by pushing forward flawed AI products while simultaneously shaping policy to their advantage[2].
  • The central thesis is that AI technology is "too important to cede to corporate control" and that current trajectories could lead to threats to democracy, privacy, and human safety[5].
  • Marcus describes this as a "perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation and inherent unreliability"[3].
  • He argues that without intervention, AI could enable unprecedented surveillance, manipulation, and control by tech companies[4].

Marcus's Policy Prescriptions: Eight Pillars for AI Governance

Marcus provides concrete policy recommendations organized around eight key areas that he believes should form the foundation of coherent AI governance. These proposals represent his vision for how democratic societies can regain control over AI development and ensure it serves the public interest rather than corporate profits[1][2].

  • Data rights legislation that gives individuals control over how their personal information is used to train AI systems[1].
  • Layered AI oversight with multiple levels of review and accountability for AI systems based on their potential impact[1].
  • Meaningful tax reform to ensure AI companies contribute fairly to society and don't externalize costs[1].
  • Transparency requirements that force companies to disclose how their AI systems work and what data they use[2].
  • Democratic accountability measures that give citizens and their representatives real power to shape AI development[2][4].
  • Marcus emphasizes that ordinary citizens must "push for what is so desperately needed" rather than waiting for government action[1].

Gary Marcus as a Reputable Authority on AI

Academic and Research Credentials

Gary Marcus brings substantial academic credentials to his critique of contemporary AI development. As Professor Emeritus of Psychology and Neural Science at New York University, he has spent decades studying the intersection of cognitive science, neuroscience, and artificial intelligence[6][7].

  • Marcus received his PhD in cognitive science from MIT, where he was mentored by Steven Pinker[6].
  • His early research focused on children's language development and acquisition of grammatical morphology, providing deep insights into how human intelligence actually works[6][8].
  • He has published extensively in leading journals including Science and Nature, establishing his reputation as a serious researcher[9].
  • Marcus began coding at age 8 and wrote a Latin-English translator at age 15, demonstrating early technical aptitude[2].
  • His academic work has consistently challenged prevailing assumptions about AI, often anticipating current limitations decades in advance[7].
  • He is described as "perhaps the youngest Professor Emeritus at NYU," indicating exceptional academic achievement[9].

Entrepreneurial Experience in AI

Marcus's credibility extends beyond academia through his direct experience building and commercializing AI technologies. This combination of theoretical knowledge and practical implementation experience strengthens his authority to critique the AI industry[6][10][7].

  • In 2014, he co-founded Geometric Intelligence, a machine learning company that was acquired by Uber in 2016 for an estimated $400 million[6][10].
  • He briefly served as director of Uber's AI efforts before leaving in 2017, giving him insider knowledge of how major tech companies approach AI development[6].
  • In 2019, he co-founded Robust.AI with Rodney Brooks, focusing on developing AI systems for robotics applications[6].
  • His entrepreneurial experience provides him with firsthand understanding of the business pressures and incentives that drive AI development[10].
  • Marcus has testified before the US Senate alongside OpenAI's Sam Altman, establishing his role as a recognized expert in AI policy debates[2][3].
  • His background spans both the technical and business aspects of AI, making him uniquely qualified to assess the industry's direction[7].

Books Making Similar Arguments Since December 2023

"The Coming Wave" by Mustafa Suleyman (2023)

Mustafa Suleyman's "The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma" (published in September 2023, just before the requested window) presents a nuanced perspective on AI's transformative potential while acknowledging significant risks and the need for careful management of emerging technologies[11][12][13].

  • Suleyman, co-founder of DeepMind and current CEO of Microsoft's Consumer AI unit, argues that AI represents a "step change in human capability" with both tremendous promise and serious dangers[11][12].
  • The book introduces the concept of "the containment problem" - the challenge of maintaining human control over increasingly powerful AI systems[11][13].
  • While more optimistic than Marcus about AI's potential benefits, Suleyman warns that AI could threaten the nation-state itself if not properly governed[11].
  • He emphasizes the need for democratic institutions to shape AI development rather than leaving it entirely to market forces[12].
  • The book provides historical context by comparing AI to previous technological revolutions, showing how societies have navigated similar transitions[12][14].
  • Bill Gates calls it "my favorite book on AI" because it offers "a clear-eyed view of both the extraordinary opportunities and genuine risks ahead"[13].

"Power and Progress" by Daron Acemoglu and Simon Johnson (2024)

Daron Acemoglu and Simon Johnson's "Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity" (published in May 2023, before the requested window but central to this debate) provides historical context for understanding how technological change can either benefit society broadly or concentrate power among elites[15][16][17].

  • The book challenges "techno-optimism" by showing how throughout history, technological advances have often benefited the powerful while marginalizing ordinary people[15].
  • Acemoglu, who shared the 2024 Nobel Prize in Economics with co-author Johnson and James Robinson, argues that "there is nothing automatic about new technologies bringing widespread prosperity"[15][16].
  • The authors trace how technological choices have been shaped by "what powerful people want and believe" rather than serving broad public interests[15].
  • They argue that AI could easily serve as "an engine of further wealth concentration" unless democratic institutions actively shape its development[15].
  • The book provides a framework for understanding why AI governance requires active intervention rather than trusting market forces[16].
  • The authors call for using AI to "create useful and empowering tools" rather than systems that automate work and increase political passivity[15].

"Nexus" by Yuval Noah Harari (2024)

Yuval Noah Harari's "Nexus: A Brief History of Information Networks from the Stone Age to AI" examines how information systems have shaped human civilization and warns about the dangers of AI-driven information networks[18][19][20].

  • Harari argues that humanity's problems stem not from human nature but from flawed information systems that spread "fictions, fantasies, and mass delusions"[19].
  • The book warns that AI could create "such a powerful network of delusions that it could prevent future generations from even attempting to expose its lies and fictions"[19].
  • Harari introduces the concept of "alien intelligence" rather than "artificial intelligence," emphasizing that AI systems think in fundamentally different ways from humans[18].
  • The book explores how AI threatens democratic conversation by making it impossible to distinguish between human and AI-generated content[19].
  • Harari warns that AI could enable unprecedented manipulation of public opinion and political processes[18].
  • The book provides a framework for understanding AI as part of a longer history of information technologies that have shaped human societies[18].

"Supremacy" by Parmy Olson (2024)

Parmy Olson's "Supremacy: AI, ChatGPT, and the Race That Will Change the World" provides an insider's account of the competition between major AI companies and the risks of leaving AI development to corporate interests[21][22][23].

  • The book focuses on the rivalry between OpenAI and Google's DeepMind, showing how corporate competition shapes AI development[21].
  • Olson, a Bloomberg technology journalist, uses exclusive access to industry sources to reveal the "exploitation of the greatest invention in human history"[21].
  • The book warns about "the profit-driven spread of flawed and biased technology into industries, education, media and more"[21].
  • Olson examines how AI companies prioritize rapid deployment over safety and reliability[22].
  • The book won the 2024 Financial Times Business Book of the Year Award for its compelling account of AI's development[22].
  • It provides detailed reporting on the personalities and business dynamics driving AI development[21].

Books Making Counter-Arguments Since December 2023

"Co-Intelligence" by Ethan Mollick (2024)

Ethan Mollick's "Co-Intelligence: Living and Working with AI" presents a more optimistic view of human-AI collaboration, arguing that AI can enhance rather than replace human capabilities when used thoughtfully[24][25][26].

  • Mollick, a Wharton professor, argues that AI should be viewed as a collaborative partner rather than a threat to human agency[24].
  • The book provides practical guidance on how to work effectively with AI systems while maintaining human creativity and judgment[24][25].
  • Mollick emphasizes "thinking with LLMs" rather than using them as substitutes for human thought[27].
  • The book presents AI as a tool that can free humans to focus on more creative and meaningful work[24].
  • Mollick advocates for embracing AI's potential while being realistic about its limitations[25].
  • The book is based on Mollick's extensive experimentation with AI tools in educational and business settings[25].

"Some Future Day" by Marc Beckman (2025)

Marc Beckman's "Some Future Day: How AI Is Going to Change Everything" offers an optimistic perspective on AI's potential to enhance human life and strengthen social bonds[28].

  • Beckman, a professor and entrepreneur, argues that AI will free up valuable time and energy for more creative and meaningful pursuits[28].
  • The book presents AI as a tool that will strengthen family bonds and improve home life[28].
  • Beckman shows how AI can enhance rather than replace human connections and community engagement[28].
  • The book provides specific steps readers can take to make AI work for them rather than against them[28].
  • Beckman argues that AI will create new opportunities for growth, innovation, and collaboration[28].
  • The book emphasizes AI's potential to improve education, healthcare, and community life[28].

"Genesis" by Henry Kissinger, Eric Schmidt, and Craig Mundie (2024)

Henry Kissinger, Eric Schmidt, and Craig Mundie's "Genesis: Artificial Intelligence, Hope, and the Human Spirit," published after Kissinger's death in November 2023, presents a cautiously optimistic view of AI's potential while acknowledging the need for careful governance[29].

  • The book looks ahead 50 years to consider AI's potential to reshape human life in positive ways[29].
  • Schmidt and Kissinger argue that AI could help address global challenges if properly directed[29].
  • The book manages to maintain optimism while addressing difficult questions about AI's societal implications[29].
  • The authors emphasize that achieving positive outcomes requires deliberate human action and good governance[29].
  • The book provides a framework for thinking about what governments should do to protect future generations[29].
  • It balances recognition of AI's transformative potential with realistic assessment of governance challenges[29].

"The Age of AI and Our Human Future" by Kissinger, Schmidt, and Huttenlocher (2021; updated edition 2022)

This collaborative work by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher was first published in 2021, predating the requested window, but remains a touchstone for the optimist position; it offers a measured perspective on AI's potential to enhance human capabilities while requiring careful stewardship[30].

  • The book argues that AI can augment human intelligence rather than replace it when properly developed[30].
  • The authors emphasize the need for maintaining human agency and values in AI development[30].
  • The book provides a framework for understanding AI as a tool that can solve complex global challenges[30].
  • It advocates for international cooperation and governance frameworks to ensure AI serves humanity's interests[30].
  • The authors bring diverse perspectives from technology, diplomacy, and academia to address AI's implications[30].
  • The book presents AI as an opportunity to enhance human civilization if properly managed[30].

Author Capsule Biographies

Critics and Skeptics

Gary Marcus is Professor Emeritus of Psychology and Neural Science at NYU and a serial entrepreneur who co-founded Geometric Intelligence (acquired by Uber) and Robust.AI. With a PhD from MIT and extensive academic publications, he has become a leading voice warning about AI's current limitations and the need for democratic governance of AI development[6][7].

Mustafa Suleyman co-founded DeepMind (acquired by Google) and currently serves as CEO of Microsoft's Consumer AI unit. His insider perspective on AI development informs his argument that AI represents both tremendous opportunity and existential risk requiring careful containment[11][12].

Daron Acemoglu, 2024 Nobel Prize winner in Economics and MIT professor, brings decades of research on institutions and technological change to his analysis of how AI could either enhance or undermine shared prosperity[15][17].

Yuval Noah Harari, historian and author of "Sapiens," applies his expertise in human civilization to examine how AI-driven information networks could fundamentally alter human society and democratic governance[18][19].

Parmy Olson, Bloomberg technology journalist with 13 years of experience covering the tech industry, uses exclusive access to industry sources to reveal the corporate dynamics driving AI development[21][22].

Optimists and Collaboration Advocates

Ethan Mollick, Wharton professor and entrepreneur, draws on extensive experimentation with AI tools in business and educational settings to argue for productive human-AI collaboration[24][25].

Marc Beckman, professor and entrepreneur, combines academic research with practical experience to present AI as a tool for enhancing human creativity and strengthening social bonds[28].

Eric Schmidt, former Google CEO and former chair of the Defense Innovation Board, brings Silicon Valley leadership experience to his analysis of AI's potential benefits and governance challenges[29].

Henry Kissinger, the former Secretary of State and foreign policy expert who died in November 2023, applied diplomatic and strategic thinking to questions of AI governance and international cooperation in the final years of his life[29].

The debate over AI's future thus features authorities with deep technical expertise, business experience, and policy knowledge on both sides, reflecting the genuine complexity of managing this transformative technology.