The Unintended Consequences of Generative AI

The water that carries a boat can also sink it, so goes the ancient Chinese saying. Generative AI is today's proverbial water: it can propel humankind to greater heights and, if we're not careful, overwhelm us with its power. One way the technology can inadvertently cause harm is by worsening the climate crisis through its energy consumption, say INSEAD professors in this INSEAD Explains video series. Another is the erosion, and even loss, of problem-solving skills and creativity. Then there are the societal and legal implications of questionable and inaccurate content.

1. Worsening climate change

Phanish Puranam, Professor of Strategy

GenAI presents two key challenges for businesses. First, the massive energy consumption required to train models like ChatGPT exacerbates the climate crisis. Unless engineers can develop more efficient hardware, this technology may hinder rather than help us achieve environmental goals. Second, over-reliance on AI may cause us to neglect, or even lose, valuable skills like problem-solving and creativity. Businesses must strategically determine which skills to retain and which to outsource to AI, balancing efficiency with the preservation of uniquely human capabilities. These choices are not just about economics, but also about identity in an AI-driven world.

2. Weakening higher-level reasoning

Hyunjin Kim, Assistant Professor of Strategy

While AI can automate tasks and enhance decision-making, early evidence suggests it may also impair higher-level reasoning skills. For instance, in financial firms, AI-powered predictions may improve investment decisions, but analysts' ability to explain and justify those decisions might decline.

This poses a challenge for businesses, as the ability to reason and communicate effectively is vital for stakeholder engagement. The key is to integrate AI into workflows strategically, ensuring that the technology enhances human capabilities rather than replaces them. This may involve redesigning processes to emphasise human reasoning and explanation, even as AI improves decision-making.

3. Producing harmful content and leaking sensitive information

Theos Evgeniou, Professor of Decision Sciences and Technology Management

GenAI's ability to create vast amounts of questionable or inaccurate content can undermine trust in information sources and complicate efforts to combat misinformation. Harmful GenAI outputs, including hate speech, illegal content or polarising information, can also have social and legal repercussions. Additionally, AI-generated content may infringe intellectual property rights or compromise individual privacy.

Information leaks pose another concern. When users input proprietary code or sensitive data into AI models, there is a risk of unintended disclosure. Until we have effective control mechanisms, businesses must carefully consider the information they share with these systems and prioritise the development and deployment of ethical AI.
