AI Art Newsletter – 9 May 2025
Latest AI Art Tools or Platforms
- Midjourney V7 – Midjourney released its Version 7 model in early April, bringing major quality boosts and new features. The update improves overall image fidelity (finally fixing notorious issues like distorted hands and bodies) and maps prompts more accurately to outputs. It introduced an experimental --exp parameter for “more detailed, dynamic looks” and is testing an Omni-Reference system that lets users lock specific characters or objects across images for consistency. These upgrades let artists re-run old prompts for “same but better” results and maintain continuity in multi-image projects.
- Stable Diffusion 3.5 – Stability AI’s latest open-source image models (released in April) are now widely available and free for most uses. The 3.5 series comes in multiple variants – including an 8.1 billion-parameter Large model and a distilled Large Turbo – offering richer image quality and faster generation. These models run on consumer GPUs and are distributed under a permissive license (allowing commercial and non-commercial use) to empower creators and developers. Together with new tools for fine-tuning and LoRA training, Stable Diffusion 3.5 continues the open-source push in generative art; a short code sketch after this list shows one way to run it locally.
- Recraft’s “Red Panda” Model – Startup Recraft burst onto the scene with a proprietary AI image generator tailored for branding and marketing. In May, Recraft announced a $30 million Series B funding round after its model (codenamed “red panda”) quietly outperformed OpenAI’s DALL·E 3 and Midjourney on a respected industry benchmark. The San Francisco–based company (now serving 4 million users) built its text-to-image model from scratch to allow precise control over outputs (e.g. placing logos exactly), addressing a common pain point for designers. Recraft’s success underscores growing interest in domain-specific AI art tools – especially those that enable brand-compliant visuals at scale.
- Adobe Firefly Updates – Adobe’s generative AI platform Firefly expanded with several new models and features this month. At Adobe’s London conference, the company unveiled Firefly Image Model 4 (and an enhanced “Ultra” version) which can generate 2K-resolution images with more lifelike detail and dynamic camera angles. Firefly’s much-anticipated Video Model also graduated from beta, now allowing users to create 1080p video clips from text prompts or edit videos with AI-generated camera movements. Additionally, a new Vector Model for text-to-SVG graphics is now available, catering to logo and icon creation. In a bid to make Firefly a one-stop hub, Adobe announced integration of third-party generative models like OpenAI’s image model, Google’s Imagen 3 (for text-to-image), and even Black Forest Labs’ Flux 1.1 into the Firefly web app. These updates make Firefly a versatile platform combining Adobe’s own models with the broader AI art ecosystem.
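For readers who want to experiment with Stable Diffusion 3.5 themselves, the sketch below shows one common way to run it locally through Hugging Face's diffusers library. It assumes diffusers exposes the 3.5 checkpoints via its StableDiffusion3Pipeline class and that you have accepted the model license on the Hub; the model id, precision, and sampling settings are illustrative defaults rather than official recommendations.

```python
# Minimal sketch: text-to-image with Stable Diffusion 3.5 via Hugging Face diffusers.
# Assumes `pip install diffusers transformers accelerate torch` and Hub access to the
# (gated) Stability AI model repo; all settings below are illustrative.
import torch
from diffusers import StableDiffusion3Pipeline

MODEL_ID = "stabilityai/stable-diffusion-3.5-large"  # or "...-large-turbo" for the distilled variant

pipe = StableDiffusion3Pipeline.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # helps the 8B model fit on a consumer GPU; use pipe.to("cuda") if you have headroom

image = pipe(
    prompt="a medieval oil painting of a mountain castle at sunset, dramatic lighting",
    num_inference_steps=28,   # Turbo variants typically need far fewer steps
    guidance_scale=4.5,
).images[0]
image.save("castle.png")
```

LoRA fine-tuning and other training workflows build on the same pipeline, but check Stability AI's model card and the diffusers documentation for current parameter recommendations before relying on the values above.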
AI Art Exhibitions or Events
- “Le Monde selon l’IA” – Paris, France: A landmark exhibition titled Le Monde selon l’IA (“The World According to AI”) opened at Jeu de Paume in Paris, bringing together 43 international artists to explore the intersections of AI and artistic creation. Billed as the first large-scale group show of its kind, it spans photography, film, video installation, sculpture, literature, and music – all examining how AI is transforming art and society. The show presents not only AI-generated works but also invites critical reflection on topics like computer vision and facial recognition, generative text and image creation, the hidden human labor behind AI, and ecological impacts. Curated by a cross-disciplinary team, Le Monde selon l’IA runs through September 21, 2025 and is accompanied by screenings, talks, and debates to help the public understand this rapidly evolving field.
- “Machine Love” – Tokyo, Japan: The Mori Art Museum in Tokyo is staging MACHINE LOVE: Video Game, AI and Contemporary Art, an ambitious exhibition running through June 8, 2025. Featuring around 50 works, it showcases how artists are harnessing game engines, virtual reality, and generative AI to forge new aesthetics. The exhibited works – created by 12 artist/creator teams – include large-scale installations where “machines” and humans jointly produce art. Notably, several pieces utilize generative AI to push creative boundaries: for example, some works visualize hyperrealistic landscapes that blur the line between virtual and real, while others use AI avatars and characters to explore new forms of identity beyond traditional social norms. By immersing visitors in interactive digital spaces, Machine Love prompts reflection on our relationship with technology – evoking emotions from empathy to anxiety – and asks how we can co-create a better future with intelligent machines.
- “AI: More Than Human” – Miami, USA: Opening May 31 at the Phillip and Patricia Frost Museum of Science in Miami, AI: More Than Human is a traveling exhibition that offers an immersive journey through the past, present, and future of artificial intelligence. Originally curated by London’s Barbican Centre, this highly interactive show blends science and art – inviting visitors to engage hands-on with AI installations and artworks. It features a star-studded roster of AI art pioneers and researchers, including Joy Buolamwini, Mario Klingemann, Lawrence Lek, Lauren McCarthy, Neri Oxman, Universal Everything, and many others. Through multimedia installations, the exhibition examines what it means to be human in the age of AI, touching on questions of consciousness and creativity. AI: More Than Human will run all summer (through Sept 1, 2025) and is included with general museum admission, aiming to demystify AI for general audiences in an engaging, family-friendly way.
- Festival of Creative AI (FoCAi) – Epsom, UK: From May 19–23, the University for the Creative Arts hosted the inaugural Festival of Creative AI, a free five-day event for artists, students, and the public to explore generative AI in creative practice. The festival featured daily hands-on workshops in visual art, music, and even AI ethics, along with creative challenges and hackathons. Participants could experiment with the latest AI art tools, showcase their own AI-generated creations, and attend talks/demos by experts – both online and in person across UCA’s campuses. According to organizers, the goal was to provide “insights into the latest AI tools and trends, skill development, and engagement with experts” in a welcoming environment. FoCAi’s success (drawing a mix of professionals and curious newcomers) suggests growing interest in learning how to create art in collaboration with AI. Given positive feedback, UCA plans to make it an annual event.
Artist Spotlights
- Dahlia Dreszer – Miami: This month, photographer Dahlia Dreszer debuted a groundbreaking exhibit that merges her cultural heritage with AI technology. Her solo show “Bringing the Outside In” (at Green Space Miami, through May 17) features vivid, maximalist still-life compositions of personal and familial objects – but with a twist. Dreszer spent the last year training an AI image generator on her own artistic style (using models such as Stable Diffusion 3.5, Midjourney, and Adobe Firefly) and for the first time included artworks produced entirely by AI in her exhibit. One centerpiece is a series of lush still lifes depicting layered Judaica heirlooms, flowers, and textiles from her Panamanian and Jewish background – imagery the AI learned to create in Dreszer’s signature style. In the gallery, visitors can interact with an AI station to generate their own images in Dreszer’s style, and an AI-driven clone of the artist (which looks and speaks like her) appears via video to guide guests through the experience. Dreszer’s optimistic embrace of AI – viewing it as just “another medium” to unlock creativity – positions her as a pioneer of AI-augmented fine art, blending human and machine imagination in a deeply personal way.
- Refik Anadol – Global: Renowned media artist Refik Anadol continues to push the boundaries of AI art on a monumental scale. In January, Anadol’s data-driven artwork “Glacier Dreams” was permanently installed at the Kunsthaus Zürich in Switzerland. This piece transforms a vast dataset of global glacier archives (plus data from Anadol’s own expeditions) into a dynamic visual experience – a mesmerizing “AI painting” that fluidly maps climate change’s impact on fragile ice landscapes. Meanwhile, Anadol’s studio also unveiled “Earth Dreams” at the Museum of the Future in Dubai, an immersive AI-powered exhibition that seamlessly integrates with the museum’s futuristic architecture. Unfolding in four interconnected chapters, Earth Dreams uses nature-themed datasets (e.g. flora, fauna, geoscapes) to generate otherworldly visuals – “fragmented dreamscapes” – that spill beyond the installation and onto the building itself. Anadol’s participation in panels like “The Future of Curation” alongside curators from SXSW and Art Dubai further underlines his leadership in the AI art field. By turning massive datasets into art, he’s not only creating jaw-dropping visuals but also provoking conversation about the fusion of technology, nature, and memory in public art.
Ethical or Legal Developments
- Getty Images vs. Stability AI: A significant copyright showdown is underway between stock photo giant Getty and the makers of Stable Diffusion. In a January pre-trial ruling in London, the High Court refused to allow Getty’s case to proceed as a broad class-action on behalf of “all affected artists,” deeming that group too undefined. Instead, Getty refiled a narrower representative claim focusing on a specific set of works (those under exclusive license to Getty), after agreeing to indemnify Stability AI against any future lawsuits from other rights-holders. This move was allowed by the court, effectively giving a green light for Getty’s suit to move forward on behalf of that subset of image owners. The case – which accuses Stability AI of scraping 12 million Getty photos without permission and even replicating Getty’s watermark in AI outputs – is headed to a full trial (scheduled for mid-2025 in the UK). Artists and agencies worldwide are watching closely, as the outcome could set a precedent for how AI training data is treated under copyright law.
- AI Training and Fair Use: In the United States, courts are beginning to weigh in on whether using copyrighted materials to train AI models counts as “fair use.” In a February ruling that sent ripples through the AI community, a Delaware judge held that an AI company’s use of copyrighted legal annotations to train a research tool was not protected by fair use. The case, Thomson Reuters v. Ross Intelligence, involved an AI-driven legal search engine that had ingested proprietary Westlaw headnotes (summaries of court cases) to improve its answers. Notably, the tool did not reproduce the headnotes verbatim to users – it only used them in the training process. However, the court drew an analogy to generative AI and concluded that even unaltered use of text for model training can infringe copyright, rejecting the idea that ingesting text without outputting it is automatically fair use. This “shot heard round the AI world” suggests that courts may not view AI training as categorically fair use, especially when the AI’s purpose could substitute for the original work. Similar U.S. lawsuits by authors and artists (from Sarah Silverman to the Authors Guild) against AI firms are ongoing, and this ruling will likely influence those cases.
- US Copyright Office Guidance: Facing these new dilemmas, the U.S. Copyright Office (USCO) released Part II of its report on Copyright and AI in early 2025, providing some clarity on policy. The Copyright Office reaffirmed that human authorship remains the bedrock of copyright – works produced entirely by autonomous AI cannot be copyrighted, but AI-assisted works can be, if a human’s contributions are “sufficiently creative” and not trivial. This means that simply tweaking an AI-generated image might not qualify, whereas a meaningful human edit or guidance could. The USCO report gave examples but acknowledged ambiguity in where that line is drawn. The upshot is that many purely AI-generated images, texts, and pieces of music are effectively in the public domain by default under current law. This has big implications: businesses using generative AI may resort to contracts and trade secrets to protect AI-created assets, since copyright won’t automatically apply. The Copyright Office did not call for new AI-specific copyright legislation yet, suggesting that existing law mostly suffices for now – but Part III of the report (due later in 2025) will tackle the thorny issue of AI training data and liability.
Image: A visitor observes a series of AI-generated portraits by artist Pindar van Arman at Christie’s “Augmented Intelligence” auction preview in New York (February 2025).
- Christie’s “Augmented Intelligence” Auction: The first-ever major AI art auction at Christie’s (held in March) underscored the ongoing ethical debates in the art world. Titled “Augmented Intelligence”, the online sale featured 34 lots of AI-created art. The results were mixed: while a piece by Refik Anadol sold for $277,200 (the highest of the auction), over a third of the lots failed to meet their reserve prices, and total sales fell short of expectations. Meanwhile, outside the auction, a coalition of artists protested – an open letter with over 6,000 signatures urged Christie’s to cancel the sale, arguing that several works were made with AI models trained on copyrighted art without permission. Christie’s went ahead, framing the auction as showcasing the “influence and importance” of AI artists, but the controversy highlighted unresolved questions of consent, credit, and authenticity in AI-generated art. The episode has prompted more discussions on setting ethical guidelines for selling AI art (some asking: is it too soon for auctions?), and demonstrates the tension between innovation and intellectual property that the art market must navigate.
Tutorials or Tips for AI Art Enthusiasts
- Midjourney Omni-Reference Guide: The new Omni-Reference feature in Midjourney V7 is a game-changer for keeping a consistent subject across multiple images. A recent tutorial by AI artist Christie C. breaks down how to use this tool. In essence, Omni-Reference lets you upload a reference image (say, a character or object) and then lock in its appearance in all your generated images – perfect for storytelling or branding. Pro tip: adjust the strength of the reference with the --ow parameter (Omni-weight, range 0–1000) to control how strictly the AI sticks to your reference. The guide also covers advanced tricks, like maintaining multiple consistent elements (for example, a specific character and a specific prop together) and using personalization tokens for finer control. If you’re looking to create a series of images with the same character or product, exploring Omni-Reference is highly recommended.
- Better Prompts = Better Art: Crafting the right text prompt remains an essential skill for AI artists. Stability AI’s official Stable Diffusion 3.5 Prompt Guide offers some helpful pointers on structuring prompts to get the results you want. The guide suggests thinking of your prompt as a recipe with clear ingredients: specify the style (e.g. “watercolor illustration” or “cinematic photograph”), the subject (what or who should be in the image, with any actions), the composition or camera framing (close-up, wide shot, perspective), and even lighting and color details (soft morning light, neon glow, etc.). By explicitly describing these elements, you give the AI more to work with and align it with your creative vision. For example, instead of just saying “a castle,” you might prompt “a medieval oil painting of a mountain castle at sunset, dramatic lighting, detailed clouds in the sky.” According to the guide, such structured prompts tend to yield more precise and vivid images. It’s a great reminder that prompt engineering is an art in itself – experiment with different adjectives, styles, and details, and don’t be afraid to iterate. (A tiny prompt-builder sketch after this list shows one way to assemble such a recipe.)
- Adobe Firefly “Remix” Tips: If you’re using Adobe’s Firefly, check out the new Firefly Boards feature, which acts as a creative sandbox for mixing images. For instance, you can drop in a few of your own photos or sketches, and use the Remix function – Firefly’s AI will analyze the visuals, auto-generate a combined prompt, and produce mash-up variations that blend elements and styles from the source images. This is a fun way to ideate new concepts (like merging two different art styles or themes) without writing a prompt from scratch. A tip from Adobe: try selecting images with clear, distinct styles or motifs before hitting Remix – the more contrast between them, the more surprising and creative the generated outcomes. Firefly Boards is still in beta, but it opens up a playful approach to AI image generation that feels akin to collage-making, powered by AI. It’s especially handy for graphic designers creating moodboards or storytellers exploring visual themes.
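To make the “prompt as a recipe” idea from the Stable Diffusion 3.5 guide a bit more concrete, here is a tiny, purely illustrative helper for assembling prompts from named ingredients. The function name and component slots are our own invention, not part of Stability AI's guide.

```python
# Illustrative prompt "recipe" builder: joins style, subject, composition, and lighting
# into one comma-separated prompt. Slot names and defaults are our own, not official.
def build_prompt(style, subject, composition="", lighting="", details=()):
    parts = [style, subject, composition, lighting, *details]
    return ", ".join(p for p in parts if p)

prompt = build_prompt(
    style="medieval oil painting",
    subject="a mountain castle at sunset",
    composition="wide shot, castle centered on a rocky ridge",
    lighting="dramatic lighting, detailed clouds in the sky",
)
print(prompt)
# -> medieval oil painting, a mountain castle at sunset, wide shot, castle centered
#    on a rocky ridge, dramatic lighting, detailed clouds in the sky
```

The point is less the code than the habit: filling in each slot forces you to decide on style, subject, framing, and light before you generate, and iterating on one slot at a time makes it easier to see which change actually improved the image.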
Notable AI Artworks in May 2025
- Dahlia Dreszer’s AI Still Lifes: Among the works in Dahlia Dreszer’s Miami exhibition was a series of still-life prints that stand out as a collaboration between her and her AI “other”. These pieces – filled with heirloom objects, flowers and textiles – were actually generated by an AI model trained on Dreszer’s own photographs, then curated and refined by the artist. The result is a set of uncanny yet beautiful images that feel authentically “hers,” yet were born from a machine’s imagination. The exhibit treats the collection as a “living organism,” even featuring an AI avatar of Dreszer to discuss each artwork with visitors. It’s a compelling example of an AI-generated artwork not just as a gimmick, but as an integral part of an artist’s evolving body of work – raising questions about co-authorship and originality.
- Refik Anadol’s ISS Dreams Sells Big: One of the highest-profile AI artworks this spring was Refik Anadol’s “Machine Hallucinations – ISS Dreams – A”, a video artwork that visualizes years of International Space Station imagery through Anadol’s AI algorithms. In Christie’s Augmented Intelligence auction, this piece became the top lot, selling for $277,200 (well above its estimate). ISS Dreams is part of Anadol’s ongoing series where AI “dreams” via vast datasets – in this case, archival NASA photographs of space – creating an abstract, swirling animation that feels like a cosmic memory. The strong sale suggests that blue-chip collectors are increasingly seeing long-term value in AI art, especially works by well-established digital artists like Anadol. It’s worth noting this isn’t the first AI artwork to hit six figures, but it is one of the priciest auction results for a piece created with generative AI, signaling confidence in the medium’s artistic merit.
- AI Short Films with Runway Gen-4: In the realm of moving images, May saw buzz around a series of short films made entirely with AI. Runway, a leader in AI video tools, showcased the capabilities of its new Gen-4 model by producing several experimental films – including “NYC is a Zoo” and “The Herd” – that test the model’s narrative storytelling potential. These films were created by feeding scripts and storyboards into Runway’s generative system, which then output sequences of video frames. The results are surreal mini-movies where human actors and real sets are nowhere to be found – instead, AI-generated characters and landscapes carry the story. While still a bit rough around the edges, the Gen-4 films demonstrate rapid progress in text-to-video technology. They hint at a future where indie creators could produce full animations or music videos with AI as the primary “camera crew.” The films were discussed at industry events and have been made available online, inspiring debate about AI’s role in filmmaking – is it a new frontier for creativity or a threat to human animators? Possibly both, but as pure AI-created art, these shorts were a milestone worth noting.