The Sequence Chat: Consensys's Lex Sokolin on Generative Art and Philosophical Principles of Generative AI
A conversation about the history, current state and foundations of generative art.

👤 Quick bio
Thanks for having me on here. In terms of my background, sometimes it feels like a pendulum swing between the rational and the creative. I am equally drawn to aesthetics and systems, sometimes at the same time. My artist statement is also at https://www.lexsokolin.com/artist-statement.

🛠 ML Work
I go back to the concept of the Uncanny Valley. We have had an enormous volume of CGI and various renderings of images over the last two decades. Artists have been trying to make things photo-realistic in movies and video games, but (1) the images were imperfect and (2) the skill to create them was prohibitive. In fact, the more people chased perfection, the more off-putting the images felt. I think a similar thing can be said of robot conversation – early attempts felt like talking to a chattering metallic machine with a rubber mask on. You could see the gears, and the fact that those gears attempted to look human was genuinely unnerving and creepy.
I used to think of AI as a counterpart to a human brain. Once we have mapped an entire human brain, in an Accelerando fashion, then we can copy/paste that intelligence and scale up our processing. But it feels more like AI has been recreating human senses at the scale of the population, of humanity. We saw how neural networks used to ingest some local data set about cats, and that was sufficient to train the network to see cats. Now, the entire container of digitized human knowledge is pumped into a mystery box, which structures that information into abstractions we cannot touch or understand.
Generative Art used to mean that you use a programming language like Processing to discover mathematical algorithms which deterministically design beautiful patterns. Those things might be fractals, or constructivist abstractions, or some other balanced recursive aesthetic. The key was being very precise in specifying the rules through programming.
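To make that concrete, here is a minimal sketch of rule-based, deterministic generative art in the spirit of Processing (written in Python purely for illustration): a recursive branching pattern whose parameters (fork angle, shrink factor, recursion depth) are explicit rules chosen by the artist, so the same code always produces the same image. The specific numbers and the SVG output path are illustrative choices, not anything referenced in the interview.

```python
# A deterministic, rule-based generative sketch: a recursive tree written out as an SVG.
# Every aesthetic decision is an explicit rule; running it twice gives the identical image.
import math

def branch(x, y, angle, length, depth, segments):
    """Recursively add line segments: each branch forks into two smaller branches."""
    if depth == 0 or length < 2:
        return
    x2 = x + length * math.cos(angle)
    y2 = y - length * math.sin(angle)
    segments.append(f'<line x1="{x:.1f}" y1="{y:.1f}" x2="{x2:.1f}" y2="{y2:.1f}" '
                    'stroke="black" stroke-width="1"/>')
    # The "rules": a fixed fork angle and a fixed shrink factor, applied recursively.
    branch(x2, y2, angle + math.pi / 7, length * 0.72, depth - 1, segments)
    branch(x2, y2, angle - math.pi / 7, length * 0.72, depth - 1, segments)

segments = []
branch(200, 380, math.pi / 2, 100, 10, segments)  # start at the bottom, pointing up
svg = ('<svg xmlns="http://www.w3.org/2000/svg" width="400" height="400">'
       + "".join(segments) + "</svg>")
with open("tree.svg", "w") as f:
    f.write(svg)
```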
I remember seeing a generative AI paper in 2014 or so, and thinking that it was impossible to commercialize. Now, there is a new step forward every week. Video game worlds are rendered in Minecraft blocks, and then styled and made alive through diffusion models. Video is in the early stages of being consumable. Music and NPC text are coming around. All these primitives will add up to supporting a spontaneous, personalized metaverse experience, regardless of Zuckerberg’s early failures. Each one of us can and will carry a secret world, and visual effects are an unbounded part of this future.
There are two dimensions I am worried about here: (1) the closing / opening of the model itself, and whether the manufacturers of the AI engine try to close down access to its use and re-use, and (2) the ability of people to own and transact around the outputs of the models in a way that advantages human dignity.

💥 Miscellaneous – a set of rapid-fire questions
I am excited to see generative AI meaningfully adopted in media and entertainment, rather than merely as a brainstorming tool. Once picture-perfect AI is available to all on cheap compute, I would expect more “art” oriented usages of the AI to emerge. In particular, ideas around glitching and deconstructing AI imagery are very interesting to me.
I personally use Midjourney, because it is optimized for consumers and is fast and easy. I think different models are likely to succeed for mainstream users versus pro-sumers or professional users.
I think we will end up with an oligopoly of AI conversational interfaces, which become deeply functional like operating systems. The OpenAI plug-in strategy is very powerful, and could kick off a race in terms of economic competition that largely benefits a single AI owner. I hope that the open source community is able to fork many of these benefits, and then create decentralized ownership and governance models that allow people to maintain their dignity (i.e., rights) as well as manageable financial models.
Art is separate from rendering and illustration. The creative commons has been a boon for the Internet and digital media, and I hope that the tooling we are building now remains largely in that commons. However, artists need economic models for their craft. The answer to that question comes in the form of digital ownership, with the earliest examples being NFTs on computational blockchains. This is the only answer I have seen as to how artists crowdfund from their communities by selling authentic art, even when infinite copies and remixes float around in the world. Perhaps we can tie in a royalty with a Web3 mechanism that allows for art to be integrated into an AI learning set, but frankly this feels like a weak mechanism for a mammoth problem.
NFTs prove authenticity and provenance, and allow for real commerce to occur around digital objects. Generative art can be special in that it is manufactured with the participation of the purchaser / minter, drawing the consumer into the creative process. I like the idea of “authentic” mints being a valuable experience with a tangible price. The limitation is that adoption of the particular market structure and shape of NFTs is still very low in the general population. We need to move from novelty to standard, in the way that vinyl records have been discarded in favor of digital music files.
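As an illustration of how the purchaser can participate in the creative process, here is a hedged sketch of a common pattern in generative collections: each piece’s traits are derived deterministically from the mint itself (the minter’s address plus the token id), so the act of minting fixes the artwork. The address, trait tables, and helper function below are hypothetical examples, not the mechanics of any particular platform.

```python
# Illustrative sketch: derive a piece's traits deterministically from mint data,
# so the same minter and token id always reproduce the same artwork.
import hashlib
import random

def traits_for_mint(minter_address: str, token_id: int) -> dict:
    # Seed derived from who minted and which token was minted.
    seed = hashlib.sha256(f"{minter_address}:{token_id}".encode()).hexdigest()
    rng = random.Random(seed)
    return {
        "palette": rng.choice(["monochrome", "primary", "pastel", "neon"]),
        "symmetry": rng.choice(["radial", "bilateral", "none"]),
        "density": rng.randint(10, 200),
        # The seed can be recorded in metadata as the provenance of the piece.
        "seed": seed,
    }

# Example usage with a made-up address and token id.
print(traits_for_mint("0x1234abcd", 42))
```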