The Signal - ChatGPT, one year on
ChatGPT, one year on

2024 may rein in the chatbot that opened the floodgates of artificial intelligence

Good morning! The OpenAI drama last month sent everyone, including investors, into a tailspin. Which raises the question: if there’s chaos *now*, what does the future hold for the world’s most famous AI company? Sam Altman & Co. may be back, but today’s story foresees that 2024 will be a grounding year for AI as regulators take stock and countries go to elections in an environment of increasing misinformation. Also in this edition: our picks of the week’s best longreads. If you enjoy reading us, why not give us a follow at @thesignaldotco on Twitter, Instagram, and Threads?

Tim Gorichanaz

ChatGPT was launched on Nov. 30, 2022, ushering in what many have called artificial intelligence’s breakout year. Within days of its release, ChatGPT went viral. Screenshots of conversations snowballed across social media, and the use of ChatGPT skyrocketed to an extent that seems to have surprised even its maker, OpenAI. By January, ChatGPT was seeing 13 million unique visitors each day, setting a record for the fastest-growing user base of a consumer application.

Throughout this breakout year, ChatGPT has revealed the power of a good interface and the perils of hype, and it has sown the seeds of a new set of human behaviors. As a researcher who studies technology and human information behavior, I find that ChatGPT’s influence in society comes as much from how people view and use it as from the technology itself.

Generative AI systems like ChatGPT are becoming pervasive. Since ChatGPT’s release, some mention of AI has seemed obligatory in presentations, conversations and articles. Today, OpenAI claims 100 million people use ChatGPT every week. Besides people interacting with ChatGPT at home, employees at all levels up to the C-suite are using the AI chatbot at work. In tech, generative AI is being called the biggest platform since the iPhone, which debuted in 2007.
All the major players are making AI bets, and venture funding in AI startups is booming. Along the way, ChatGPT has raised numerous concerns, such as its implications for disinformation, fraud, intellectual property issues and discrimination. In my world of higher education, much of the discussion has surrounded cheating, which has become a focus of my own research this year.

Lessons from ChatGPT’s first year

The success of ChatGPT speaks foremost to the power of a good interface. AI has already been part of countless everyday products for well over a decade, from Spotify and Netflix to Facebook and Google Maps. The first version of GPT, the AI model that powers ChatGPT, dates back to 2018. And even OpenAI’s other products, such as DALL-E, did not make the waves that ChatGPT did immediately upon its release. It was the chat-based interface that set off AI’s breakout year.

There is something uniquely beguiling about chat. Humans are endowed with language, and conversation is a primary way people interact with each other and infer intelligence. A chat-based interface is a natural mode for interaction and a way for people to experience the “intelligence” of an AI system. The phenomenal success of ChatGPT shows again that user interfaces drive widespread adoption of technology, from the Macintosh to web browsers and the iPhone. Design makes the difference.

At the same time, one of the technology’s principal strengths – generating convincing language – makes it well suited for producing false or misleading information. ChatGPT and other generative AI systems make it easier for criminals and propagandists to prey on human vulnerabilities. The potential of the technology to boost fraud and misinformation is one of the key rationales for regulating AI.

Amid the real promises and perils of generative AI, the technology has also provided another case study in the power of hype.
This year has brought no shortage of articles on how AI is going to transform every aspect of society and how the proliferation of the technology is inevitable. ChatGPT is not the first technology to be hyped as “the next big thing,” but it is perhaps unique in simultaneously being hyped as an existential risk. Numerous tech titans and even some AI researchers have warned about the risk of superintelligent AI systems emerging and wiping out humanity, though I believe that these fears are far-fetched. The media environment favors hype, and the current venture funding climate further fuels AI hype in particular. Playing to people’s hopes and fears is a recipe for anxiety with none of the ingredients for wise decision making.

What the future may hold

The AI floodgates opened in 2023, but the next year may bring a slowdown. AI development is likely to meet technical limitations and encounter infrastructural hurdles such as chip manufacturing and server capacity. Simultaneously, AI regulation is likely to be on the way.

This slowdown should give space for norms in human behavior to form, both in terms of etiquette, as in when and where using ChatGPT is socially acceptable, and effectiveness, like when and where ChatGPT is most useful. ChatGPT and other generative AI systems will settle into people’s workflows, allowing workers to accomplish some tasks faster and with fewer errors. In the same way that people learned “to google” for information, humans will need to learn new practices for working with generative AI tools.

But the outlook for 2024 isn’t completely rosy. It is shaping up to be a historic year for elections around the world, and AI-generated content will almost certainly be used to influence public opinion and stoke division. Meta may have banned the use of generative AI in political advertising, but this isn’t likely to stop ChatGPT and similar tools from being used to create and spread false or misleading content.
Political misinformation spread across social media in 2016 as well as in 2020, and it is virtually certain that generative AI will be used to continue those efforts in 2024. Even outside social media, conversations with ChatGPT and similar products can be sources of misinformation on their own. As a result, another lesson that everyone – users of ChatGPT or not – will have to learn in the blockbuster technology’s second year is to be vigilant when it comes to digital media of all kinds.

Tim Gorichanaz is Assistant Teaching Professor of Information Science, Drexel University. This article is republished from https://theconversation.com under a Creative Commons licence. Read the original article at https://theconversation.com/chatgpt-turns-1-ai-chatbots-success-says-as-much-about-humans-as-technology-218704

ICYMI

When in Rome…: Roads paved with good intentions seldom hold strong. When Foxconn announced its plans to make the latest iPhone 15 in India, the move was heralded as a global shift in the world’s manufacturing ecosystem. Since then, the company has been caught in a classic case of being a cultural misfit. Chinese supervisors are frustrated with their Indian counterparts’ slower pace, multiple tea breaks, and need for too many holidays. They’re also puzzled to find Indians unwilling to work longer hours for bonuses. Clearly, China’s infamous work culture of intense competition, cheekily nicknamed ‘neijuan’ or involution, is finding no takers here. Despite that, efficiencies are increasing, and workers are warming to Foxconn’s relentless production pursuits and the challenges they entail. To know more about this fascinating world of Foxconn in India, read this insightful piece in Rest of World.

First there was sportswashing, now there’s…: …greenwashing. West Asia’s oil-rich states are snapping up sports clubs and leagues with sovereign fund (read: oil) money, and the UAE in particular seems to be applying that playbook to climate change too.
The appointment of Adnoc (Abu Dhabi National Oil Company) chief Sultan al-Jaber as COP28 president was contentious enough as it was. The Financial Times now reports that al-Jaber is pretty much using the climate conference for dealmaking, and not just on energy transition projects across Asia, Africa, and South America. The UAE is the eighth-largest oil producer and one of the world’s largest emitters of hydrocarbons, yet al-Jaber hasn’t specified any deadline for a phase-down. Could Adnoc setting aside $150 billion for a five-year expansion plan have anything to do with it? Your guess is as good as ours.

Cloak & dagger in research: In his stellar work on ecological collapse, The Nutmeg’s Curse, writer Amitav Ghosh describes how the responsibility for climate impact was consciously shifted to the individual through sustained advertising efforts. Energy company British Petroleum (BP) spent over $100 million annually on campaigns that also deeply embedded the perception that climate change was not a present reality but a future threat. The energy industry was way ahead in understanding what was in store for the world, and it did its best to shift responsibility elsewhere. An investigation by Europe’s clean transport campaign group, Transport and Environment, suggests that Big Oil executives helped set up a research group on air pollution. Concawe, or the Conservation of Clean Air and Water in Europe, was set up by the industry; it allegedly masqueraded as an advocacy organisation while using its own research papers to combat studies and opinions unfavourable to the industry’s products. It specifically tried to discredit research linking benzene pollution to cancer.

The man behind the machines: Artificial intelligence (AI) as we know it today wouldn’t exist without Nvidia. And Nvidia wouldn’t exist without CEO Jensen Huang.
In one of the best longform profiles in months, The New Yorker walks us through the evolution of the company, which came into being as NVision (a name that was chucked soon after Huang and Co. learnt it belonged to a toilet paper manufacturer). Huang, a Taiwanese immigrant, was schooled in a religious reform institution where kids literally fought for their lives, married his high school sweetheart, and started his Silicon Valley career as a microchip designer. In 2013, he had the foresight to see that AI would be the next big thing. That foresight has turned Nvidia into a company with a market cap of more than $1 trillion. You’d think Huang would be a flagbearer for autonomous-everything, but he isn’t. If anything, he’s bullish on the “omniverse”. Also consider this longread a crash course on GPUs and CUDA.

A portrait of contradictions: Ammon Bundy and his family are willing to lay down their lives for freedom. And so are the thousands of other members of Bundy’s People’s Rights Network, a loose collection of right-leaning and libertarian groups and individuals in the US, characterised by their deep distrust of the state, science, and the ‘system’. But Bundy, whose father Cliven provided the spark for the movement by refusing to yield grazing land to the government, is not an all-American hero. Instead, this profile in The Atlantic finds a man of deep contradictions. He is an ordinary suburban dad whose followers storm a hospital threatening violence at his command. He is bombastic about fighting the system and refusing to yield to its rules, but a stint in solitary confinement breaks his resolve, pushing him to post bail when he’s arrested a second time. Bundy’s politics are also contradictory – he doesn’t like Donald Trump, for instance – and his years of leading standoffs and doxxing people he deems instruments of the state have left him financially ruined.
Read the story to understand the origins of the US’ culture wars and domestic terrorism groups, as well as the mystery of how Ammon Bundy suddenly disappeared, to the relief of his victims.

How China made advanced chips: In August 2023, when Chinese electronics company Huawei unveiled the sleek Mate 60 smartphone series, it set alarm bells ringing in the West, particularly the US. It was as if all the restrictions imposed on China to prevent it from accessing high technology had been futile. The Mate 60 is powered by Charlotte, an advanced 7nm chip. Until then, it was believed that China did not have the capability to make such an advanced semiconductor. After the US imposed crippling sanctions, Huawei took a risky wager on the Semiconductor Manufacturing International Corporation (SMIC), which claimed it could make advanced chips using equipment it already had. It would be expensive, but the job could be done. The collaboration produced Charlotte, or the Kirin 9000S, whose performance is only marginally lower than that of Qualcomm’s semiconductors. The Financial Times pieced together the story of how Beijing did what appeared impossible, or at least not possible within two years. Its next target: taking on Nvidia with AI chips.