Summarize this content to 2000 words in 6 paragraphs in Arabic Stay informed with free updatesSimply sign up to the Artificial intelligence myFT Digest — delivered directly to your inbox.Any company such as OpenAI, heading for a loss of $5bn last year on $3.7bn of revenue, needs a good story to tell to keep the funding flowing. And stories don’t come much more compelling than saying your company is on the cusp of transforming the world and creating a “glorious future” by developing artificial general intelligence.Definitions vary about what AGI means, given that it represents a theoretical rather than a technological threshold. But most AI researchers would say it is the point at which machine intelligence surpasses human intelligence across most cognitive fields. Attaining AGI is the industry’s holy grail and the explicit mission of companies such as OpenAI and Google DeepMind, even though some holdouts still doubt it will ever be achieved.Most predictions of when we might reach AGI have been drawing nearer due to the striking progress in the industry. Even so, Sam Altman, OpenAI’s chief executive, startled many on Monday when he posted on his blog: “We are now confident we know how to build AGI as we have traditionally understood it.” The company, which triggered the latest investment frenzy in AI after launching its ChatGPT chatbot in November 2022, was valued at $150bn in October. ChatGPT now has more than 300mn weekly users.There are several reasons to be sceptical about Altman’s claim that AGI is essentially a solved problem. OpenAI’s most persistent critic, the AI researcher Gary Marcus, was quick off the mark. “We are now confident that we can spin bullshit at unprecedented levels, and get away with it,” Marcus tweeted, parodying Altman’s statement. In a separate post, Marcus repeated his assertion that “there is zero justification for claiming that the current technology has achieved general intelligence”, citing its lack of reasoning power, understanding and reliability.But OpenAI’s extraordinary valuation seemingly assumes that Altman may be right. In his post, he suggested that AGI should be seen more as a process towards achieving superintelligence than an end point. Still, if the threshold ever were crossed, AGI would probably count as the biggest event of the century. Even the sun god of news that is Donald Trump would be eclipsed.Investors reckon that a world in which machines become smarter than humans in most fields would generate phenomenal wealth for their creators. Used wisely, AGI could accelerate scientific discovery and help us become vastly more productive. But super-powerful AI also carries concerns: excessive concentration of corporate power and possibly existential risk.Diverting though these debates may be, they remain theoretical, and from an investment perspective unknowable. But OpenAI suggests that enormous value can still be derived from applying increasingly powerful but narrow AI systems to a widening number of real-world uses. The industry phrase of the year is agentic AI, using digital assistants to achieve specific tasks. Speaking at the CES event in Las Vegas this week, Jensen Huang, chief executive of chip designer Nvidia, defined agentic AI as systems that can “perceive, reason, plan and act”. Agentic AI is certainly one of the hottest draws for venture capital. CB Insights’ State of Venture 2024 report calculated that AI start-ups attracted 37 per cent of the global total of $275bn of VC funding last year, up from 21 per cent in 2023. The fastest-growing areas for investment were AI agents and customer support. “We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies”, Altman wrote.Take travel, for example. Once prompted by text or voice, AI agents can book entire business trips: securing the best flights, finding the most convenient hotel, scheduling diary appointments and arranging taxi pick-ups. That methodology applies to a vast array of business functions and it’s a fair bet that an AI start-up somewhere is working out how to automate them. Relying on autonomous AI agents to perform such tasks requires a user to trust the technology. The problem with hallucinations is now well known. One other concern is prompt injection, where a malicious counterparty tricks an AI agent into disclosing confidential information. To build a secure multi-agent economy at scale will require the development of trustworthy infrastructure, which may take some time.The returns from AI will also have to be spectacular to justify the colossal investments being made by the big tech companies and VC firms. How long will impatient investors hold their nerve?john.thornhill@ft.com

شاركها.
© 2025 خليجي 247. جميع الحقوق محفوظة.
Exit mobile version