
Just how good are the new wave of AI image generation tools?

AI-generated imagery is here. Type a simple description of what you want to see into a computer and beautiful illustrations, sketches, or photographs pop up a few seconds later. By harnessing the power of machine learning, high-end graphics hardware is now capable of creating impressive, professional-grade artwork with minimal human input. But how could this affect video games? Modern titles are extremely art-intensive, requiring countless pieces of texture and concept art. If developers could harness this tech, perhaps the speed and quality of asset generation could radically increase.

However, as with any groundbreaking technology, there's plenty of controversy too: what role does the artist play if machine learning can generate high quality imagery so quickly and so easily? And what of the data used to train these AIs - is there an argument that machine learning-generated images amount to passing off the work of human artists? There are major ethical questions to grapple with once these technologies reach a certain degree of effectiveness - and based on the rapid pace of improvement I've seen, they may need to be addressed sooner rather than later.

In the meantime, the focus of this piece is to see just how effective these technologies are right now. I tried three of the leading AI generators: DALL-E 2, Stable Diffusion, and Midjourney. You can see the results in the embedded video below (and indeed in the collage at the top of this page), but to be clear, I generated all of the images myself, either through the tools' web portals or by running the models directly on local hardware.
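For anyone curious what "running them directly on local hardware" can look like in practice, here's a minimal sketch of local Stable Diffusion generation, assuming the Hugging Face diffusers library and an Nvidia GPU with CUDA support. The checkpoint name, prompt and output file below are illustrative, not the exact setup used for this piece.

```python
# Minimal sketch: generating a single image locally with Stable Diffusion
# via the Hugging Face diffusers library (illustrative model ID and prompt).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a publicly available SD checkpoint
    torch_dtype=torch.float16,         # half precision to fit consumer GPUs
)
pipe = pipe.to("cuda")                 # requires a CUDA-capable graphics card

prompt = "concept art of a ruined castle at sunset, digital painting"
image = pipe(prompt).images[0]         # run the diffusion sampling loop
image.save("castle_concept.png")
```

In rough terms, a text prompt goes in, the model iteratively denoises random noise into a matching image, and a finished picture comes out a few seconds later on a capable GPU - which is the workflow the rest of this piece puts to the test.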

Source: Eurogamer