Today marks the start of NaNoGenMo, the annual challenge for automatic text generation. To participate, you have to write and publish code that generates a literary text at least 50K words long. I haven't decided yet whether I will take part this year, but I did last year, and now, just for closure, I'll tell you how it went.
A year ago I built and submitted a combination of two neural networks: the first is a conditional GPT generator, pre-trained on a corpus of cyberpunk and cypherpunk texts, aphorisms, and complex authors like Kafka and Rumi. The…
There is an old Slavic calligraphy tradition called ‘Russian monospaced Vyaz’, or simply ‘Vyaz’. It is unique in its special letterforms and rules of ligature generation. For example, take a look at the works of the contemporary artist Viktor Pushkarev.
Once a colleague of mine, Anna Shishlyakova, drew a nice piece of Vyaz, and ever since I kept wondering whether such ligatures could be generated automatically. Several years passed, and I finally found a weekend to make something like this:
TL;DR: I took a subset of 18.5K portraits from the dataset of the Kaggle competition Painter by Numbers and grouped them by style and gender. Then I used the Facer library by John W. Miller to build average faces from these portrait groups, as well as a time-lapse of average faces from portraits dating from the Middle Ages to the 20th century. Check out my society6 page for prints of these portraits.
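At its core, an “average face” is just a pixel-wise mean over a stack of images (Facer additionally aligns facial landmarks before averaging, which is what keeps the result sharp). A minimal sketch of the averaging step alone, on synthetic stand-in data rather than the real portrait crops:

```python
import numpy as np

# Stand-in for a group of same-sized portrait crops:
# 18 fake 64x64 RGB images with random pixel values.
rng = np.random.default_rng(42)
portraits = rng.integers(0, 256, size=(18, 64, 64, 3))

# The naive "average face": mean over the image axis.
# Without landmark alignment this just produces a blur,
# which is why libraries like Facer align faces first.
average_face = portraits.mean(axis=0).astype(np.uint8)
print(average_face.shape)  # (64, 64, 3)
```

The real pipeline detects facial landmarks in each portrait, warps every face onto a common landmark grid, and only then averages, so eyes, noses, and mouths line up across the group.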
I used metadata from the Painter by Numbers dataset, in which portraits make up less than 20% of the images. The metadata is quite detailed and convenient, including…
The rest of the post gives some clues about the optimization process — from a clean and shiny 752-byte Rosetta Code implementation to the end and beyond.
I have already written about the annual competition for automatic novel generation, National Novel Generation Month (also known as NaNoGenMo). During November, participants have to write a computer program that generates a text of at least 50,000 words, and publish its source code. No other rules apply.
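Since the word count and published source really are the only requirements, even a deliberately trivial entry qualifies — the challenge's original announcement jokingly suggested 50,000 repetitions of the word “meow”. A sketch of such a minimal “novel”:

```python
# The smallest possible NaNoGenMo entry: 50,000 words, no other rules.
# (The original announcement's example was exactly this kind of joke.)
novel = " ".join(["meow"] * 50000)

print(len(novel.split()))  # 50000
```

Everything interesting in NaNoGenMo happens in the gap between this lower bound and something a human might actually want to read.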
A few days ago, Pavel Gertman came to me with an interesting question: did I know the exact dates when Conway’s Game of Life, the R-pentomino, and the Glider were invented? Either they had just turned 50 years old, or they would within a few days, and we didn’t want to miss the moment. Different sources give conflicting information, dating the appearance of Conway’s Game of Life, the R-pentomino, and the Glider to 1969 or 1970, in different combinations. And the authorship (or rather the discovery) of the Glider is attributed either to Conway or to Richard Guy.
Seven months ago, I set up a pretrained neural network to detect animals appearing on online cameras in various nature parks and to send notifications to the @WebCamWatcher Telegram channel. Now I’ll tell you a little about what became of this venture:
Wildpark-MV, a German wildlife park near the town of Güstrow, installed a dozen webcams so that anyone can watch animals in the wild. Most of the cameras hang in a natural forest through which wild animals pass from time to time. And although the camera locations were chosen thoughtfully (the lake where bears like to wallow during the day, the spot where food is left out for the lynx, and so on), it is quite difficult to catch the animals on camera. …
TL;DR: I collected a large dataset of drum patterns, then used a neural network to map them into an explorable latent space with some recognizable genre areas. Try the interactive exploration tool or download several thousand unique generated beats.
In recent years there have been many projects dedicated to neural-network-generated music (including drum patterns). Some of these projects use an explicit construction of a latent space in which each point corresponds to a melody. This space can then be used both to study and classify musical structures and to generate new melodies with…
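The core idea — a low-dimensional space in which every point decodes back to a pattern — can be illustrated without a neural network at all. A toy sketch using PCA (via SVD) as a stand-in for the VAE encoder/decoder, on synthetic binary drum grids rather than the real dataset:

```python
import numpy as np

# Synthetic stand-ins for drum patterns: 200 binary grids of
# 4 instruments x 16 steps, flattened (1 = hit, 0 = rest).
rng = np.random.default_rng(0)
patterns = (rng.random((200, 4 * 16)) < 0.3).astype(float)

# "Encoder": project the centered patterns onto the top-2
# principal directions, giving each pattern 2-D coordinates.
mean = patterns.mean(axis=0)
X = patterns - mean
_, _, vt = np.linalg.svd(X, full_matrices=False)
latent = X @ vt[:2].T

# "Decoder": any 2-D point maps back to a soft pattern;
# thresholding would turn it into playable hits again.
decoded = latent @ vt[:2] + mean

print(latent.shape, decoded.shape)  # (200, 2) (200, 64)
```

A VAE does the same encode/decode round trip with nonlinear networks and a regularized latent distribution, which is what makes interpolation between genre areas produce sensible in-between beats.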
You can try the interactive color name space exploration here (best viewed on desktop).
First, I checked whether someone had done something like this before me, and of course I found a similar project: a researcher from Colorado, Janelle Shane, used a base of 7,700 color names from the Sherwin-Williams Company, the world’s largest paint manufacturer. She then trained a char-based RNN to generate names for RGB colors. She also wrote a follow-up later in which she tested a few more ideas.
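One common way to condition a char-level model on color values is to serialize each (RGB, name) pair into a single training string, so the network learns to continue a color prefix with a plausible name. A sketch of that data-preparation step (the sample colors and format are illustrative, not the actual training corpus):

```python
# Hypothetical (RGB, name) pairs standing in for the real color base.
samples = [((178, 34, 34), "firebrick"), ((250, 128, 114), "salmon")]

def encode(rgb, name):
    # One training line per color: hex value, then the name to learn.
    return "#%02x%02x%02x %s\n" % (*rgb, name)

corpus = "".join(encode(rgb, name) for rgb, name in samples)
print(corpus, end="")
```

At sampling time, the model is fed a hex prefix for a new color and asked to complete the line, which is where the invented names come from.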
I wasn’t happy with the quality of that work’s results, even allowing for some obvious cherry-picking…
After making the Pianola network project, I came up with the idea of a Raspberry Pi music box for infinite neural network music generation.
Basically, I used the same VAE-like neural network approach as in our Neyro-Skryabin performance, but I had to squeeze it into a Raspberry Pi 3 box and tune the model for near-real-time generation of an endless stream of MIDI music.
For hardware, I took a Raspberry Pi 3 Starter Kit and a Leadsound speaker like this. For software, I had to set up TensorFlow for ARMv7l and used mido + pygame to play the music aloud.
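On the box itself the MIDI stream is produced with mido and played through pygame; as a self-contained illustration of what such a stream actually is at the byte level, here is a stdlib-only sketch that writes a minimal single-track MIDI file by hand (the pitches and timing are arbitrary examples, not the model’s output):

```python
import struct

def vlq(n):
    # MIDI delta times use a variable-length quantity encoding:
    # 7 bits per byte, high bit set on all but the last byte.
    out = [n & 0x7F]
    n >>= 7
    while n:
        out.append(0x80 | (n & 0x7F))
        n >>= 7
    return bytes(reversed(out))

def note_events(pitches, ticks=240):
    # Each note: immediate note-on (0x90), note-off (0x80) `ticks` later.
    data = b""
    for pitch in pitches:
        data += vlq(0) + bytes([0x90, pitch, 100])
        data += vlq(ticks) + bytes([0x80, pitch, 0])
    return data

def midi_bytes(pitches):
    track = note_events(pitches) + vlq(0) + b"\xff\x2f\x00"  # end-of-track
    # Header chunk: 6 data bytes -> format 0, 1 track, 480 ticks/quarter.
    header = b"MThd" + struct.pack(">IHHH", 6, 0, 1, 480)
    return header + b"MTrk" + struct.pack(">I", len(track)) + track

data = midi_bytes([60, 62, 64, 65, 67])  # a short C-major fragment
print(data[:4], len(data))
```

In practice mido builds exactly this structure for you (and pygame’s mixer hands the file to the system synthesizer), but seeing the raw chunks makes it clearer why an “endless stream” has to be generated and flushed in small pieces.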