Good morning. An Amnesty International report found that Serbian authorities have been using “advanced phone spyware” to target journalists, activists and other individuals as part of a “covert surveillance campaign.”

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

👀 AI for Good: Eye health
🚨 First stage of UK online safety rules comes into play
👁️🗨️ The Meta future: AI glasses and augmented reality
🌎 AI didn’t have a huge impact on 2024 elections
AI for Good: Eye health

Source: Unsplash
Last year, researchers at Moorfields Eye Hospital and UCL Institute of Ophthalmology developed a medical AI foundation model called RETFound that can both predict ocular diseases and use retinal images to predict other illnesses.

The details: The model, trained on millions of retinal images, was open-sourced upon its completion, enabling anyone to deploy it.

In the study presenting the model — which was published in Nature — the researchers detailed its performance in diagnosing several ocular diseases (such as glaucoma) that impair vision over time. The model achieved an accuracy of 0.94 (95% confidence interval) in detecting eye diseases, meaning it could help improve their early diagnosis. The system also showed promise for the prediction and diagnosis of other diseases — such as heart disease and Parkinson’s — based on retinal image scans.
Why it matters: The idea is to improve early diagnosis, which improves outcomes. In part, as the researchers noted, such a system could also broaden access to the kind of healthcare that has become cost-prohibitive for many people.
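Because the weights were open-sourced, putting the model to work is roughly a matter of loading the published encoder and attaching a task-specific head. The sketch below is illustrative only, assuming PyTorch and timm as the tooling: the checkpoint path, label set and preprocessing are placeholder assumptions, not the actual RETFound release or its published pipeline.

```python
# Illustrative sketch: running a retinal scan through an open-source retinal
# foundation model. Paths, labels and preprocessing are hypothetical.
import torch
import timm
from PIL import Image
from torchvision import transforms

# RETFound builds on a ViT-Large encoder; here we attach a small 2-class head
# (e.g., glaucoma vs. healthy) for illustration.
model = timm.create_model("vit_large_patch16_224", num_classes=2)
state = torch.load("retfound_checkpoint.pth", map_location="cpu")  # hypothetical local file
model.load_state_dict(state, strict=False)  # load encoder weights; head stays randomly initialized
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("retina_scan.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(image), dim=-1)
print(f"P(disease) = {probs[0, 1]:.3f}")
```

In practice the released weights are a pretrained encoder, so any diagnostic head like the one above would still need fine-tuning on labeled scans before its outputs were meaningful.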
How GitHub Copilot impacted 4,200+ engineers at 200+ companies

Where does hype end and reality begin with GenAI?

In 2024, Jellyfish introduced the Copilot Dashboard to measure the impact of the most widely adopted GenAI coding tool. Since then, they’ve gathered data from over 4,200 developers at more than 200 companies, creating a representative sample of how engineering organizations are using Copilot and what impact it’s having on production.

Andrew Lau, Jellyfish CEO, presented the findings as part of his keynote at the Engineering Leadership Community (ELC) annual conference. His slide deck is available for download to help engineering and business leaders understand whether they’re getting adequate return on their AI investments.
First stage of UK online safety rules comes into play

Source: Unsplash
British media and telecommunications watchdog Ofcom brought the U.K.’s sweeping Online Safety Act into force on Monday by publishing its first-edition codes of practice for tech firms regarding illegal content online.

The details: This first edition is focused on harmful and illegal content — fraud, child sexual abuse, terror, etc. — on platforms ranging from social media to dating apps, search engines and gaming sites. Tech firms have until March 16 to evaluate the safety of their platforms; from that point on, Ofcom will be able to enforce the Act by levying fines of up to 10% of a company’s global annual revenue.

Ofcom said each company should name a single senior person accountable for compliance; CNBC reported that, for repeated offenses, those individuals could face jail time. In serious cases, Ofcom said it will apply for a court order to block a site in the U.K.

“For too long, sites and apps have been unregulated, unaccountable and unwilling to prioritize people’s safety over profits. That changes from today,” Dame Melanie Dawes, Ofcom’s chief executive, said in a statement. “We’ll be watching the industry closely to ensure firms match up to the strict safety standards set for them under our first codes and guidance, with further requirements to follow swiftly in the first half of next year.”
Ofcom said this is just the beginning; it plans to issue more guidelines throughout 2025, including an exploration of the use of AI to tackle online harms such as child sexual abuse material.

The landscape: An interesting element of this involves the ways in which generative AI can and has accelerated some of the illegal and harmful content highlighted above. Nonconsensual deepfakes, for instance, can qualify as sexual abuse material; it wasn’t too long ago that such material spread almost unchecked across X.
YouTube will now allow creators to opt in to third-party AI training. Creators will have the option to authorize a number of companies, including Adobe, IBM, Meta, Microsoft, Anthropic, OpenAI and xAI, to train models on their content.

OpenAI announced Monday that it is bringing ChatGPT Search to all free users. The company said at the same time that it is shipping general improvements to its search product.
Why the U.S. government is saying all citizens should use end-to-end encrypted messaging (CNBC).
AI is the black mirror (Nautilus).
Google debuts new AI video generator Veo 2 claiming better audience scores than Sora (VentureBeat).
The global AI race is increasingly multipolar (Semafor).
Demand for Starlink in Zimbabwe is overwhelming capacity (Rest of World).
If you want to get in front of an audience of 200,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
The Meta future: AI glasses and augmented reality

Source: Meta
Meta on Monday said that two new features — live AI and live translation — will soon be making their way to the company’s Ray-Ban Meta smart glasses.

The details: It’s not yet clear when the software update, currently available to early-access members, will roll out to all users.

The “live AI” feature allows Meta’s generative AI system to “see what you see continuously and converse with you,” kind of like Siri if it were hooked up to an always-on camera. The live translation feature allows for live audio or transcript translation between English, Spanish, French and Italian.
It’s not clear how much this update adds to the energy demands of the glasses’ regular functionality.

At the same time, Meta CTO Andrew Bosworth wrote in a blog post that “2024 was the year AI glasses hit their stride … for many people, glasses are the place where an AI assistant makes the most sense, especially when it’s a multimodal system that can truly understand the world around you.”

“We’re right at the beginning of the S-curve for this entire product category,” he added.
AI didn’t have a huge impact on 2024 elections

Source: Unsplash
A major fear going into the elections that took place throughout 2024 involved generative AI, specifically the role that AI-generated misinformation might play in political outcomes. The concern here, as we discussed right before the U.S. election, is simple: armed with generative AI, threat actors can easily and quickly create an enormous supply of convincing misinformation, be it deepfakes of political candidates (which happened) or convincing but fake iterations of local news alerts, something that could have impacted people’s perceived ability to vote.

The unknown impact of proliferating — and convincingly realistic — misinformation was a top concern for a number of states in the lead-up to Election Day.

What happened: Now, with the world’s elections in the rear-view mirror, researchers are starting to re-examine what that impact actually turned out to be. According to a new report from Princeton computer scientists Arvind Narayanan and Sayash Kapoor, political misinformation — while a real problem — was not an AI problem this year.

The details: In analyzing every (known) use of AI curated by the Wired AI Elections Project, the pair identified three main takeaways: one, half of the AI use wasn’t intentionally deceptive; two, the deceptive content that was made with AI would have been just as easy to produce without it; and three, supply is the wrong lens for the problem; we should be looking at demand instead.

They found that 39 of the 78 instances of AI use in the database were not deceptive; in many cases, generative AI was used by campaigns to broaden their reach in some way. Of course, some of the non-deceptive AI-generated content included the kinds of confabulations and hallucinations tied to the architecture of the technology. A more rampant problem involved so-called “cheap fakes,” which present raw video out of context or apply subtle edits to it (slowing it down to make speech appear slurred, for example) to shape a voter’s impression of a candidate.
According to a database of viral instances of global electoral misinformation maintained by the News Literacy Project, only 6% of total instances were created using AI; some 45%, however, involved those aforementioned tricks of context.

“Increasing the supply of misinformation does not meaningfully change the dynamics of the demand for misinformation since the increased supply is competing for the same eyeballs,” Kapoor and Narayanan wrote. “Moreover, the increased supply of misinformation is likely to be consumed mainly by a small group of partisans who already agree with it and heavily consume misinformation rather than to convince a broader swath of the public.”

The report focused purely on political misinformation, noting — as we’ve talked about often — that other facets of AI-generated fraud and misinformation, which encompass deepfake abuse and harassment as well as targeted phishing and identity theft, remain a significant problem.

“Thinking of political misinformation as a technological (or AI) problem is appealing because it makes the solution seem tractable. If only we could roll back harmful tech, we could drastically improve the information environment,” the two wrote. “While the goal of improving the information environment is laudable, blaming technology is not a fix.”

I find it interesting, though not particularly surprising, that generative AI played a relatively small role in the elections this year, though I do not think it was at all a bad thing for people to express extreme concern over this dynamic. Concern leads to vigilance, and vigilance keeps us sharp.

But while the brand of political misinformation they discussed might not have been an AI problem, I would argue a different brand is — if misinformation, as they argued, is driven in part by political polarization and in part by how people get their information, it’s hard to claim that technology isn’t a culprit.

Yes, people have lost trust in the media and in institutions. But search and social media algorithms (which leverage AI) don’t value truth; instead, they push out content that keeps people on platforms, incentivizing everyone from bad actors to media organizations to employ emotional, suggestive and misleading headlines, all of which fuel a destructive cycle.

There are other things at play here, but we can’t erase the role technology plays in accelerating these elements — if Google Search, for instance, were optimized for factuality rather than SEO, we would be in a very different situation.
💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on AI training opt-outs:

50% of you said training should be opt-in only; only 28% said opt-outs are fine.

Opt-out is fine:

“If the work is public in any way — on social, in a gallery, etc. — I don’t see an issue unless the result is a replica/copy which of course would fall under copyright.”

Do you use smart glasses? If not, do you think you will?
|
|
|