Good morning. The OpenAI leadership saga continues … but frankly, it was way more Game of Thrones-y back in 2023.

Oh well.

— Ian Krietzberg, Editor-in-Chief, The Deep View

In today’s newsletter:

🏥 AI for Good: Sustainable buildings
🏛️ US Govt. agency launches AI tool for internal use
👁️🗨️ Executive shuffle: Sam Altman’s shifting focus
🚨 AI powers massive escalation of cyber attacks
AI for Good: Sustainable buildings

Source: Unsplash
“The buildings where we work, shop and live,” according to Inger Andersen, the executive director of the United Nations Environment Programme (UNEP), “account for a third of global emissions and a third of global waste.”

But a new report from the UNEP found that, for the first time since 2020, building emissions have stopped rising. It’s a step in the right direction, but, according to Andersen, “we must do more and do it faster.”

The details: In that effort to do more and, to borrow Andersen’s phrasing, “do it faster,” the algorithms that lie beneath the wide umbrella of “artificial intelligence” are increasingly being leveraged both to build and to operate buildings more sustainably.

A recent report from the European Commission claimed that AI is playing a “crucial role in enhancing energy efficiency.” This takes a few different forms; when it comes to building operation, machine learning algorithms that manage Heating, Ventilation and Air Conditioning (HVAC) systems are delivering passive energy gains. These kinds of ‘smart,’ algorithmic operations can extend to water consumption and broader electricity use, enabling far less waste of resources.
Models are also being used in the design phase of building development, as a means of sustainable optimization through simulations and digital twins. These models process parameters such as location, time and material constraints to produce the most energy-efficient designs possible under the given circumstances.
Design and ship your dream site with Framer. Zero code, maximum speed.

Just publish it with Framer. Beautiful sites without code — easily. Join thousands of designers and teams using Framer to turn ideas into high-performing websites, fast. The internet is your canvas.

Framer is simple to learn, and easy to master.

Check out the Framer Fundamentals Course to build on your existing design skills and quickly go live in Framer. Perfect for designers transitioning from Figma or Sketch.

Get 25% off for 3 months with code THEDEEPVIEW
US Govt. agency launches AI tool for internal use

Source: GSA
The U.S. General Services Administration (GSA) officially announced the launch of an internal generative AI tool last week. Very little is known about the tool, though the GSA said in a release that it is currently seeking feedback from staff to refine it, and is working toward offering it as a “shared service” to other federal agencies in the near future.

The details: The tool was developed in-house, according to the statement, as a means of addressing privacy and security concerns around third-party models.

“This launch is just the beginning,” Zach Whitman, GSA’s Chief AI Officer and Chief Data Officer (CDO), said.

While the statement claims that the tool was developed in-house, Wired reported in February that Elon Musk’s DOGE had been working on developing a custom chatbot for the GSA, later reporting in March that DOGE’s chatbot had been deployed to 1,500 federal workers. It’s unclear if this “custom” chatbot is just a wrapper around one of the models developed by xAI, Musk’s AI company. Other specifics — the size of the model, its training data, how it was trained, how it was validated, whether it runs in the cloud or on the edge, how its data is secured, whether the GSA built it with any specific applications in mind and whether it will be used to replace workers or reduce staffing requirements — remain unknown.
It is also unclear how the agency is communicating the reliability, fairness and trustworthiness of the chatbot to its staff; internal memos seen by Wired offered prompting tips and warned users not to “type or paste federal nonpublic information.”

A GSA spokesperson told me that the tool was developed to “create efficiencies, increase productivity and support staff in their daily work.” They added that it has been in development for more than 18 months, though they did not answer questions concerning Musk’s role in the creation of the bot, any specifics around its anticipated applications or any model-specific details.

Algorithmic bias remains a key issue in the field of AI, one that has been causing harm for more than a decade. As machine learning evolved into the generative AI we deal with today, the datasets expanded and the issues of bias persisted, since bias is ingrained in the training data. And because models always answer a query, and never warn users when a given system lacks the necessary quantity or equity of data to answer that query properly, there is a significant risk of people allowing their decision-making to be swayed by quietly biased systems.
Accelerate AI to value with confidence.

Inference accuracy challenges aren’t just inconvenient—they pose serious risks to healthcare outcomes, financial performance, safety, and regulatory compliance.

CloudFactory specializes in eliminating these risks by solving complex data challenges and delivering trusted, reliable AI models at scale. We partner closely with your team, meeting you exactly where you are in your AI journey, to ensure your AI solutions drive confident, measurable results.

Solve complex data challenges that impact inference quality
Ensure accuracy and reliability in high-stakes scenarios
Accelerate your AI initiatives, from strategy to deployment
👉 Discover how CloudFactory empowers AI excellence
👉 Read from our blog
👉 Connect with our team today
Why handing over total control to AI agents would be a huge mistake (MIT Tech Review).
An mRNA cancer vaccine may offer long-term protection (Science News).
Google is rolling out Gemini’s real-time AI video features (The Verge).
Quantum computing startup PsiQuantum raising at least $750 million (Reuters).
Trump pledges auto, pharma tariffs in ‘near future,’ sowing more trade confusion (CNBC).
Executive shuffle: Sam Altman’s shifting focus

Source: OpenAI
The news: OpenAI on Monday announced a bit of an executive shuffle, with COO Brad Lightcap’s role expanding significantly to head up the company’s global business and day-to-day operations.

CEO Sam Altman, meanwhile, will be shifting his focus “more to the technical side,” according to Bloomberg, which first reported the news.

Altman will be focusing on the research and development of OpenAI’s products. The company is not planning to hire a CTO to replace Mira Murati, who left several months ago to launch her own startup, according to Bloomberg. As part of the shuffle, Julia Villagra and Mark Chen have been promoted to Chief People Officer and Chief Research Officer, respectively.
“OpenAI has grown a lot,” Altman wrote in a blog. “We remain focused on the same core — pursuing frontier AI research that accelerates human progress — but we now also deliver products used by hundreds of millions of people.”

All that real-world usage, according to Altman, is a boon to OpenAI’s research.

This move marks the latest shift in a company that’s been evolving from day one; founded as a non-profit alongside Elon Musk a decade ago, OpenAI has spent the past several years inching ever closer to becoming a full-fledged for-profit company, a transition marred by lawsuits and that brief ouster (and reinstatement) of Altman in 2023.

The startup is currently in talks to raise $40 billion in funding at a $300 billion valuation.
AI powers massive escalation of cyber attacks

Source: Unsplash
Though machine learning has been around for a long time now, the proliferation of generative AI — sparked by OpenAI’s 2022 release of ChatGPT — massively and immediately changed the cybersecurity threat landscape.

The now-famous chatbot went live in November; by January, one of the earliest known instances of GenAI-enabled voice fraud — featuring a group of kidnappers demanding a million-dollar ransom for a daughter who was, unbeknownst to her mom, safe in bed — had occurred.

And while regulation has lagged, the tech has simply grown, becoming more advanced and even more accessible.

iProov, an identity verification firm, recently published its Threat Intelligence Report for 2025, which identified a 300% increase in face swap attacks, a 783% increase in injection attacks against mobile applications and a 2,665% increase in injection attacks against virtual cameras. “Attack-as-a-service” communities, meanwhile, have grown massively.
iProov referred to this environment as an “exponential threat landscape,” saying that “advances in synthetic media tools, combined with thriving Crime-as-a-Service marketplaces, have created a democratized environment where complex attacks can be launched by actors with minimal technical expertise.”

This has become something of a theme.

Recent research from Menlo Security identified a 140% increase in browser-based phishing attacks and a 130% increase in zero-hour phishing attacks in 2024, compared to 2023 numbers. This boost, according to the report, was driven by generative AI, a tool that attackers are using to bypass traditional security measures and target specific users through their browsers.

The majority of the fraud Menlo identified centered on impersonation websites designed to trick people into entering highly personal information by posing as a generative tool of some kind. “In addition to cybercriminals stealing sensitive and personal information, the returned document is typically a PDF where malware can hide out and be delivered,” Menlo said.

And in its threat intelligence report for the latter half of 2024, cybersecurity firm Mimecast found that the spread of AI chatbots has “lowered the bar for cybercrime.”

“Social engineering attacks maintain high success rates, evolving through the integration of automated AI technologies,” Mimecast wrote. “Advanced persistent threats now leverage sophisticated deepfake technologies and AI-generated content for targeted attacks, significantly complicating traditional detection and prevention mechanisms.”

The firm added that, while AI is helping cybersecurity analysts, the tech is very clearly benefiting attackers as well.

In its recently released State of Human Risk report for 2025, Mimecast found that 43% of companies “have seen an increase in internal threats or data leaks initiated by compromised, careless or negligent employees.” Even as 95% of organizations are using AI tools to defend against both internal and external cyber threats, 55% do not yet have specific strategies in place for dealing with the cybersecurity threats posed by generative AI.
Two-thirds of the businesses the firm spoke to believe “it is inevitable or likely that their organization will suffer a negative business impact from an attack linked to an email or collaboration tool in 2025.”

Underscoring all of this, the U.S. National Institute of Standards and Technology just published its official guidance on mitigating cyberattacks against generative AI.

Jeff Schumann, a Mimecast VP of AI strategy, told me that AI “is only empowering our attackers to be more intelligent, more thoughtful, more creative, more accurate, even, in their approaches to getting into the environments.”

In the midst of this growing threat, he said that the strongest defensive strategy should be built around the common denominator that ties this all together, and here, that is the human element. Schumann explained that, almost all the time, there is a human either being taken advantage of, or accidentally creating a vulnerability, that is then exploited; one of Mimecast’s focuses, then, is on managing that internal risk as a means of shielding against external threats.

The company has an AI-enabled security solution called Insider that’s “phenomenal at managing the data exposure risk outside the walls across a variety of different vehicles in the world today,” according to Schumann.

“So when I say vehicles, I mean things like GenAI solutions, like dropping data into ChatGPT or DeepSeek, so having not just visibility into that, but the ability to do preventative controls and know when it's happening, and perhaps prevent it from happening,” he said. He acknowledged that the solution — like all AI applications — is “only as good as the data,” something that raises security and privacy questions, especially given the breadth and scope of a solution like this. Mimecast’s answer is transparency, traceability, privacy, flexible customer controls and security by design.
To that end, Mimecast has secured an ISO 42001 certification, an international standard that affirms the existence of proper governance controls related to AI systems.

“Security by design is a day-one initiative,” Schumann said. “And so if you have recently become an AI company, and suddenly are just focused on feature functionality to try to compete with your competition, odds are you probably didn't focus on this from day one.”

He added that the logistics here involve “more than just encryption. It starts with the architecture up front, and we actually isolate a lot of things from each other.”

Keeping datasets anonymized and stored in separate areas protects against potential attacks; as part of this, Mimecast deploys internal AI systems to identify and automatically prevent the processing of personal and confidential information.

And beyond that architecture, Schumann said that trust is vital, and explainability is the key lever toward earning it.

“When it comes to the use of AI, I think it’s being able to show the traceability,” he said. “In the applications themselves, can you see how a model calculated an answer? Yes, you can, because we architected it that way.”

Which image is real?
💭 A poll before you go

Thanks for reading today’s edition of The Deep View!

We’ll see you in the next one.

Here’s your view on the proposed Apple class action:

35% wouldn’t join it; you kept your old iPhone. But a quarter of you would — you upgraded because of those ads.

Didn’t upgrade:

“Given the built-in hype surrounding LLMs, and the uncertainty factors noted, the ‘noise to signal’ ratio is still far too high or the utility too low to rely on it solely for anything mission critical. Even so, it is thoroughly fascinating.”
Something else:

Has your company suffered an AI-related cyber attack that you know of?

If you want to get in front of an audience of 450,000+ developers, business leaders and tech enthusiasts, get in touch with us here.
|
|
|