's process to create 2D game assets, I began playing around with DreamBooth. And I'm surprised by how good the results are!
First I trained it with a small set of 6 pixelated wizards, like these:
So, ChatGPT understands #elixirlang and #liveview and can write complex, correct Elixir code.
Here is a working LiveView game I built with ZERO human coding, entirely from ChatGPT prompts.
https://thetinycto.com/gpt-game
Don't write code often?
Been a while?
Worry no more!
Introducing the "Fuzzy Compiler"
Just make up your own code, no worrying about syntax. It translates into working code with comments.
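The "Fuzzy Compiler" is just prompt construction around an LLM call. A minimal sketch of the idea in Python; the template wording, `build_prompt`, and the example pseudocode are all hypothetical, and the actual model call is omitted:

```python
# Hypothetical sketch of the "Fuzzy Compiler": wrap made-up pseudocode in a
# prompt asking an LLM to translate it into real, commented code. The actual
# ChatGPT/API call is left out; only the prompt-building step is shown.

PROMPT_TEMPLATE = (
    "Translate the following made-up pseudocode into working {language} code. "
    "Fix any syntax issues and add explanatory comments.\n\n"
    "Pseudocode:\n{pseudocode}\n"
)

def build_prompt(pseudocode: str, language: str = "Elixir") -> str:
    """Build the text that would be sent to the model."""
    return PROMPT_TEMPLATE.format(language=language, pseudocode=pseudocode)

# Example made-up "code" a user might type:
prompt = build_prompt("for each user, if score > 10 then promote(user)")
print(prompt)
```

The pseudocode passes through verbatim, so the model sees exactly what the user invented, plus instructions on what to turn it into.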
See example 1:
Generative AI is the biggest thing to happen to the game industry since 3D. If you're in games, and you're not experimenting with it, then you're already behind.
Some observations and predictions on just how revolutionary it's going to be. 👇
They just uploaded my @ekoparty talk about new phishing-detection techniques!
https://youtube.com/watch?v=k9jcRyFlpF8&t=74…
Even with these issues, the results were fast and easy to achieve.
In their current state, they can be used to prototype games or for game jams. And they'll be way better than any programmer art! 😄 So I think indie game devs will benefit most from this.
2. The top tiling is still off. It seems SD generates tiling meant for a flat square, not for wrapping onto a sphere or cube like here.
Here I'm looking upwards and the artifacts are clearly seen:
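One cheap way to quantify the tiling problem described above is to compare a texture's opposite edges: a 360° skybox texture needs its left and right columns to match so it wraps seamlessly. A toy sketch (tiny hand-made grayscale grids, not real SD output):

```python
def horizontal_seam_error(pixels):
    """Mean absolute difference between the left and right edge columns of
    an image given as rows of grayscale values. 0 means the texture wraps
    perfectly left-to-right, which a 360-degree skybox needs."""
    diffs = [abs(row[0] - row[-1]) for row in pixels]
    return sum(diffs) / len(diffs)

seamless = [[5, 9, 5], [7, 2, 7]]        # matching edges -> wraps cleanly
broken   = [[0, 9, 200], [10, 2, 250]]   # mismatched edges -> visible seam

print(horizontal_seam_error(seamless))   # 0.0
print(horizontal_seam_error(broken))     # 220.0
```

The same check against the top row (versus itself reversed, or versus a solid zenith color) would catch the "looking upwards" artifacts in the screenshot.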
I needed to confirm that they actually looked good in a real game even with the low-ish resolution.
So I opened one of the Unity FPS examples, imported the textures, and the initial results were pretty good!
But there was a problem...
At first the results looked promising! I could ask for skies with 3 moons, a blood moon or a black hole.
I quickly found out that I needed more VRAM to create high-resolution images. I'm using Google Colab, with 16 GB, and the largest images I could create are 1024x512.
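The VRAM wall comes from how memory scales with resolution. Stable Diffusion denoises latents at 1/8 of the image resolution with 4 channels, and U-Net activation memory grows roughly with latent area, so doubling both dimensions quadruples the footprint. A back-of-the-envelope sketch (latent tensor only, fp16; real total VRAM is much larger, this just shows the scaling):

```python
# Rough scaling sketch: SD latents are 1/8 the image resolution, 4 channels.
# Computing just the latent tensor size shows why each resolution doubling
# hurts so much -- activations scale the same way.

def latent_bytes(width, height, channels=4, bytes_per_val=2, downscale=8):
    """Size in bytes of one fp16 latent tensor for a given image size."""
    return (width // downscale) * (height // downscale) * channels * bytes_per_val

small = latent_bytes(1024, 512)    # the largest size that fit in 16 GB
large = latent_bytes(2048, 1024)   # one doubling step up

print(small, large, large // small)  # 65536 262144 4 -> 4x per doubling
```

So going from 1024x512 to 2048x1024 means roughly 4x the working memory, which is why 16 GB tops out where it does.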
Continuing to explore more #gameassets created with 🤖 #AI.
I had the hypothesis that #StableDiffusion would be good at creating skyboxes. I trained a #Dreambooth model and began prompting!
Results, analysis, actual uses in #unity3d, and problems I found down below! 👇
I designed the most incredible pack of potions with #AI (and nothing but AI - #StableDiffusion)
Mega-thread
Follow the exploration below, especially if you're in the #gaming industry (game dev, game artist, creative director, etc.). Content production is about to be transformed.
I'm manually discarding about 50% of the results as they are pretty bad. Maybe I need a more coherent training set?
I quickly tried creating a sprite sheet for a simple animation and found no way to make a good one. The frames just break continuity. Will keep prompting!
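The frame-continuity problem is on the model's side, but packing frames into a sheet is pure bookkeeping. A sketch of the layout math a sprite-sheet tool needs (the function name and 4-column default are illustrative, not any engine's API): given N frames of a fixed size, compute the sheet size and each frame's pixel rectangle, which engines like Unity then slice on.

```python
import math

def sprite_sheet_layout(n_frames, frame_w, frame_h, columns=4):
    """Return the (width, height) of the packed sheet and a per-frame list
    of (x, y, w, h) pixel rectangles, filled row by row."""
    rows = math.ceil(n_frames / columns)
    sheet = (columns * frame_w, rows * frame_h)
    rects = [((i % columns) * frame_w, (i // columns) * frame_h, frame_w, frame_h)
             for i in range(n_frames)]
    return sheet, rects

sheet, rects = sprite_sheet_layout(6, 32, 32)
print(sheet)       # (128, 64): 4 columns x 2 rows of 32px frames
print(rects[5])    # (32, 32, 32, 32): 6th frame, second row, second column
```

Getting SD to make adjacent rects actually animate coherently is the part that still needs prompting work.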
Re-trained the model with a total of 15 pixel-art wizards.
Asked for a blue-robed wizard and it produced these pretty good results.
The faces are off, but since pixel art doesn't show much facial detail, I don't think it matters here.
To practice with stable diffusion I made its "hello world": self portraits!
It's better than starting in uncharted waters like game assets.
I trained the model with 20-ish pictures and began prompt-hacking. Thanks
I can tell that the model doesn't fully understand what "casting a spell" is. Maybe I need to train it on that beforehand? Or use img2img. I'm still pretty new to this 😅
Main Track Short Talks #EKO2022: @DiegoFreijo, Senior Security Engineer & Manager at @anvil_secure. "Hominoid: An anti-phishing plugin proof of concept"
Save your seat http://ekoparty.org/r/k5j
Writing secure code with Union Types
Union types are great at eliminating whole families of runtime errors. They let us define the exact shape of the data we're working with.
Example below!
But this is a "safe" if, because the compiler knows the exact shape of the data depending on which branch we're in.
If we want to add a new case, the compiler will throw an error. This increases the codebase's safety! 🎉
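The thread's original example was presumably Elixir; here is the same idea sketched in Python with a typed union (the `User`/`greeting` names are hypothetical). A static checker like mypy or pyright plays the role of "the compiler" described above: it knows the exact shape in each branch and flags non-exhaustive handling when a new case is added to the union.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class LoggedIn:
    username: str

@dataclass
class Guest:
    pass

# The union defines the exact shapes a User can take -- nothing else.
User = Union[LoggedIn, Guest]

def greeting(user: User) -> str:
    # "Safe" branching: in each branch the checker narrows user to one
    # shape, so user.username is only reachable where it actually exists.
    if isinstance(user, LoggedIn):
        return f"Welcome back, {user.username}!"
    if isinstance(user, Guest):
        return "Hello, guest!"
    # Adding a new case to User makes a static checker flag this function
    # as non-exhaustive -- the safety win the thread describes.
    raise AssertionError(f"unhandled user shape: {user!r}")

print(greeting(LoggedIn("ada")))  # Welcome back, ada!
print(greeting(Guest()))          # Hello, guest!
```

In Elixir the same effect comes from tagged tuples or structs plus pattern matching with Dialyzer typespecs; the principle, letting the type narrow per branch, is identical.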