TLDR: The pre-order of my book Paranoid Transformer, generated entirely by a bunch of neural networks, is now open. In this post, you can find the story behind it.


Today NaNoGenMo starts, the annual challenge for automatic text generation. To participate, you have to write and publish code that generates a literary text at least 50K words long. I have not yet decided whether I will participate this year, but last year I took part, and now I will tell you what came of it, just to close the gestalt.

A year ago, I built and submitted a combination of two neural networks: the first is a conditional GPT generator, pre-trained on a bunch of cyberpunk and cypherpunk texts, aphorisms, and complex authors like Kafka and Rumi. The second is a BERT-based filter, which rejects boring and clumsy phrases and keeps the valid and shiny ones. I trained this filter on manual markup, most of which was done by Ivan Yamshchikov. In the end, it turned out to be a pretty good generator of cyber-paranoid delusions in English. …
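
For the record, the pipeline is plain generate-then-filter. Here is a minimal sketch of that shape using Hugging Face transformers; the checkpoint names, sampling settings, and the 0.5 threshold are placeholders, not the models fine-tuned for the book:

```python
# Minimal generate-then-filter sketch with Hugging Face transformers.
# The checkpoints and the 0.5 threshold are placeholders, NOT the models
# actually fine-tuned for Paranoid Transformer.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # stands in for the fine-tuned GPT
# Any binary text classifier can play the role of the "keep vs. discard" BERT filter here;
# in the real project the filter was trained on manual markup of generated phrases.
scorer = pipeline("text-classification",
                  model="distilbert-base-uncased-finetuned-sst-2-english")

def paranoid_lines(prompt, n=10, keep_threshold=0.5):
    """Generate n candidate continuations and keep only the ones the filter likes."""
    kept = []
    outputs = generator(prompt, num_return_sequences=n,
                        max_new_tokens=40, do_sample=True)
    for out in outputs:
        line = out["generated_text"].strip()
        verdict = scorer(line)[0]          # {'label': ..., 'score': ...}
        if verdict["label"] == "POSITIVE" and verdict["score"] > keep_threshold:
            kept.append(line)
    return kept

print("\n".join(paranoid_lines("the network dreams of")))
```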



TLDR: I tried to automate the formation of ligatures in the style of Slavic calligraphy called monospaced Vyaz. A gallery with examples of the results can be found here. Details are below.

Intro

There is an old Slavic calligraphy tradition called ‘Russian monospaced Vyaz’, or just ‘Vyaz’. It is unique in terms of its special forms and rules of ligature generation. For example, take a look at the works of the contemporary artist Viktor Pushkarev.

Once, a colleague of mine, Anna Shishlyakova, drew a nice piece of Vyaz, and since then I kept wondering whether such ligatures could be generated automatically. Several years passed, and I finally found a weekend to make something like this:



Averaged faces from classical paintings

TLDR: I took a subset of 18.5K portraits from the dataset of the Kaggle competition Painter by Numbers and arranged them by style and gender. Then I used the Facer library by John W. Miller to build average faces from these portrait groups, as well as a time-lapse of average faces from portraits dating from the Middle Ages to the 20th century. Check out my society6 page for prints of these portraits.

Details

I used the metadata from the Painter by Numbers dataset, in which portraits make up less than 20% of the paintings. The metadata is quite detailed and convenient, including authors, styles, titles, and years of creation. After filtering, I had about 18.5K paintings declared as portraits. …
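
The filtering step itself is a few lines of pandas. Below is a rough sketch, where the file and column names (genre, style, date) are my assumptions about the competition's metadata file rather than guaranteed fields:

```python
# Rough sketch of the filtering step with pandas. The file and column names
# ("all_data_info.csv", "genre", "style", "date") are my assumptions about the
# Painter by Numbers metadata, not guaranteed.
import pandas as pd

meta = pd.read_csv("all_data_info.csv")

# Keep only paintings declared as portraits (well under 20% of the dataset).
portraits = meta[meta["genre"].str.contains("portrait", case=False, na=False)].copy()

# A numeric creation year is needed later for the time-lapse ordering;
# non-numeric dates like "c. 1650" become NaN and are dropped.
portraits["year"] = pd.to_numeric(portraits["date"], errors="coerce")
portraits = portraits.dropna(subset=["year"])

print(len(portraits), "portraits kept")
print(portraits["style"].value_counts().head())
```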


TLDR: As part of a fun online competition called Nano-NaNoGenMo, we managed to craft a 123-byte Perl script for Markov chain text generation; 139 bytes in total with the shell command (and even shorter if we allow some input-file tweaking; see the remark at the very bottom of the post).


The rest of the post gives some clues about the optimization process, from a clean and shiny 752-byte Rosetta Code implementation down to the final result and beyond.
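
For context, the technique being golfed is the most basic word-level Markov chain. Spelled out in plain Python (my own illustration, not a translation of the 123-byte Perl), it looks like this:

```python
# Word-level, order-1 Markov chain text generation, spelled out in Python.
# This illustrates the idea behind the golfed Perl script; it is not a
# byte-for-byte translation of it.
import random
from collections import defaultdict

def build_chain(text):
    chain = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)   # duplicates preserve transition frequencies
    return chain

def generate(chain, n_words=50000):
    word = random.choice(list(chain))
    out = [word]
    while len(out) < n_words:
        followers = chain.get(word)
        if not followers:                 # dead end: restart from a random word
            word = random.choice(list(chain))
        else:
            word = random.choice(followers)
        out.append(word)
    return " ".join(out)

if __name__ == "__main__":
    with open("input.txt", encoding="utf-8") as f:   # e.g. a Project Gutenberg text
        print(generate(build_chain(f.read())))
```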

Competition and Rules

I have already written about the annual competition for automatic novel generation, National Novel Generation Month (also known as NaNoGenMo). During November, participants have to write a computer program that generates a text of at least 50,000 words and publish its source code. No other rules apply.

But this year it had a twist: Nick Montfort, a narratology enthusiast from MIT, decided to run a spin-off competition, Nano-NaNoGenMo (aka NNNGM), with only one additional rule: the program must be at most 256 bytes long (with the possibility of using any Project Gutenberg file as input). …



A few days ago, Pavel Gertman came to me with an interesting question. He asked whether I knew the exact dates when Conway’s Game of Life, the R-pentomino, and the Glider were invented. Either they had just turned 50 years old, or they would in a few days, and we didn’t want to miss the moment. Different sources give conflicting information, placing the appearance of Conway’s Game of Life, the R-pentomino, and the Glider in 1969 or 1970, in different combinations. And the authorship (or rather the discovery) of the Glider is attributed either to Conway or to Richard Guy.

From the pieces, I got this picture: Conway formulated the rules of his Game of Life in the first half of 1969 in Cambridge and at first iterated different initial patterns manually (on paper and on a board), but this process turned out to be difficult and boring. Therefore, he asked the Cambridge computer lab for help, and Steve Bourne (the author of the Bourne shell, aka sh) and Mike Guy, both working there on ALGOL 68C at the time, came to his aid. They wrote a program for the PDP-7 that facilitated the computation of generations of Life, and together they began to observe the development of various patterns. At that time, they were especially occupied with the R-pentomino, which demonstrates chaotic dynamics over its first 1000+ generations. Towards the end of the summer, Mike’s father, the mathematician Richard Guy, joined the experiments, and it was he who, according to Conway’s recollections, “at the very end of the fall” of 1969 accidentally noticed that at the 69th generation of the R-pentomino’s evolution something interesting emerges from the chaos: a pattern that moves steadily across the grid with period 4 and speed c/4. …
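
Replaying that experiment today takes only a few lines. Below is my own throwaway sketch (unrelated to the original ALGOL/PDP-7 program) that evolves the R-pentomino with the standard Life rules:

```python
# A tiny set-based Life engine, just to replay the R-pentomino's evolution.
# My own throwaway sketch, unrelated to the original ALGOL/PDP-7 program.
from collections import Counter

def step(cells):
    """Advance a set of live (x, y) cells by one generation (B3/S23 rules)."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in cells)}

# R-pentomino:   .XX
#                XX.
#                .X.
cells = {(1, 0), (2, 0), (0, 1), (1, 1), (1, 2)}

for generation in range(1, 70):
    cells = step(cells)
print("live cells after 69 generations:", len(cells))
```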


Seven months ago, I used a pretrained neural network to detect the appearance of animals on webcams in various nature parks and to send notifications to the @WebCamWatcher Telegram channel (a rough sketch of the detection loop follows the list below). Now I’ll tell you a little about what has happened with this venture:

  • The channel gained a small but steady audience, which has organized an additional @WCWfriends chat room to discuss photos caught by the “bot.”
  • I added several cameras from African national parks, which enlivened the set of pictures.
  • Dima Kryukov asked me to make a similar thing for cameras on Russian rivers in order to detect boats passing by. …
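
As promised above, here is a rough sketch of the watcher loop behind the channel. The camera URL, bot token, chat id, and threshold are placeholders, and the torchvision Faster R-CNN merely stands in for whatever pretrained network was actually deployed:

```python
# Rough sketch of the watcher loop: grab a frame, run a pretrained detector,
# notify the Telegram channel if an animal class shows up. The camera URL,
# bot token, chat id, and threshold are placeholders; the torchvision
# Faster R-CNN stands in for whatever network was actually used.
import io
import time
import requests
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn

CAM_URL = "https://example.com/webcam/current.jpg"      # placeholder
BOT_TOKEN, CHAT_ID = "123456:ABCDEF", "@WebCamWatcher"   # placeholders
ANIMAL_CLASS_IDS = set(range(16, 26))   # COCO ids 16..25: bird, cat, ..., giraffe

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
to_tensor = transforms.ToTensor()

def check_once(threshold=0.7):
    raw = requests.get(CAM_URL, timeout=30).content
    image = Image.open(io.BytesIO(raw)).convert("RGB")
    with torch.no_grad():
        pred = model([to_tensor(image)])[0]
    found = any(float(score) > threshold and int(label) in ANIMAL_CLASS_IDS
                for label, score in zip(pred["labels"], pred["scores"]))
    if found:
        requests.post(f"https://api.telegram.org/bot{BOT_TOKEN}/sendPhoto",
                      data={"chat_id": CHAT_ID}, files={"photo": raw})

while True:
    check_once()
    time.sleep(600)   # poll every ten minutes
```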



Wildpark-MV, a German wildlife park near the town of Güstrow, installed a dozen webcams so that anyone can watch animals in the wild. Most of the cameras hang in a natural forest through which wild animals pass from time to time. And although the camera spots were chosen deliberately (the lake where bears like to wallow during the day, the place where food is left out for the lynx, and so on), it is still quite difficult to catch the animals on camera. …


Percussion Beats And Where To Find Them

TL;DR: I collected a large dataset of drum patterns, then used a neural network approach to map them into an explorable latent space with some recognizable genre areas. Try the interactive exploration tool or download several thousand unique generated beats.

Context Overview

In recent years, there have been many projects dedicated to neural-network-generated music (including drum patterns). Some of these projects explicitly construct a latent space in which each point corresponds to a melody. Such a space can then be used both to study and classify musical structures and to generate new melodies with specified characteristics. Others use less complex techniques, such as “language model” approaches. …
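
As a toy illustration of that latent-space idea (not the model behind this post), one can flatten fixed-length drum patterns into binary vectors and train a small autoencoder whose two-dimensional bottleneck becomes the explorable space:

```python
# Toy illustration of the latent-space idea, NOT the actual model behind this
# post: a tiny autoencoder with a 2-D bottleneck over binary drum patterns
# (instruments x steps), trained here on random stand-in data.
import numpy as np
import tensorflow as tf

N_INSTRUMENTS, N_STEPS = 9, 16
PATTERN_DIM = N_INSTRUMENTS * N_STEPS

# Stand-in data: random sparse patterns; a real run would load a drum dataset.
patterns = (np.random.rand(5000, PATTERN_DIM) < 0.15).astype("float32")

encoder_in = tf.keras.Input(shape=(PATTERN_DIM,))
x = tf.keras.layers.Dense(128, activation="relu")(encoder_in)
z = tf.keras.layers.Dense(2)(x)                      # the explorable 2-D space
encoder = tf.keras.Model(encoder_in, z, name="encoder")

decoder_in = tf.keras.Input(shape=(2,))
x = tf.keras.layers.Dense(128, activation="relu")(decoder_in)
decoder_out = tf.keras.layers.Dense(PATTERN_DIM, activation="sigmoid")(x)
decoder = tf.keras.Model(decoder_in, decoder_out, name="decoder")

autoencoder = tf.keras.Model(encoder_in, decoder(encoder(encoder_in)))
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(patterns, patterns, epochs=5, batch_size=64, verbose=0)

# Any point of the 2-D space now decodes into a beat.
beat = decoder.predict(np.array([[0.5, -1.0]], dtype="float32")) > 0.5
print(beat.reshape(N_INSTRUMENTS, N_STEPS).astype(int))
```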


TLDR: As part of a series of experiments on generating text with neural networks, at some point I came up with the not-so-fresh idea of generating color names.


You can try the interactive color namespace exploration here (better viewed on a desktop).

And now for some more details:

At first, I checked whether someone had done something like this before me, and of course I found a similar project: Janelle Shane, a researcher from Colorado, used a base of 7,700 color names from the Sherwin-Williams Company, the world’s largest paint manufacturer. She then taught a char-based RNN to generate names for RGB color values. She also wrote a follow-up later in which she tested a few more ideas.
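
For reference, that char-RNN trick boils down to something like the sketch below: a character-level LSTM language model trained on existing names and then sampled. This is my own reconstruction of the generic technique with stand-in data, not her code or mine:

```python
# Sketch of the generic char-RNN trick for inventing color names: a character-
# level LSTM language model trained on existing names and then sampled.
# My own reconstruction of the technique with stand-in data, not her code.
import numpy as np
import tensorflow as tf

names = ["midnight blue", "dusty rose", "sea foam", "burnt umber"]  # stand-in data
text = "\n".join(names) + "\n"
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}

SEQ_LEN = 10
xs = np.array([[char_to_idx[c] for c in text[i:i + SEQ_LEN]]
               for i in range(len(text) - SEQ_LEN)])
ys = np.array([char_to_idx[text[i + SEQ_LEN]] for i in range(len(text) - SEQ_LEN)])

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(chars), 16),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(xs, ys, epochs=50, verbose=0)

# Sample new characters one by one, starting from a seed window.
seed = list(text[:SEQ_LEN])
for _ in range(30):
    window = np.array([[char_to_idx[c] for c in seed[-SEQ_LEN:]]])
    probs = model.predict(window, verbose=0)[0]
    probs = probs / probs.sum()                       # guard against rounding drift
    seed.append(chars[np.random.choice(len(chars), p=probs)])
print("".join(seed))
```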

I wasn’t happy with the quality of the results of that work, even allowing for some obvious cherry-picking, but I had some ideas of my own that I decided to check. In this post, I release the first part of my results, and I will post a follow-up later if everything works out. …



After making the Pianola network project, I came up with the idea of a Raspberry Pi music box for infinite neural network music generation.

Basically, I used the same VAE-like neural network approach as in our Neyro-Skryabin performance, but I had to squeeze it into a Raspberry Pi 3 box and tune the model to generate an endless stream of MIDI music in near real time.

As hardware, I took a Raspberry Pi 3 Starter Kit and a small Leadsound speaker like this one. As software, I had to set up TensorFlow for ARMv7l and used mido + pygame to play the music aloud.
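
The playback side is the simplest part. On the Pi it can look roughly like the sketch below, where generate_midi() is a stub standing in for the neural network and the rest is a plausible mido + pygame arrangement rather than the exact code running in the box:

```python
# Rough sketch of the playback loop on the Pi: some model produces MIDI files,
# mido writes them, pygame plays them back-to-back. The generate_midi() stub
# stands in for the actual neural network.
import time
import mido
import pygame

def generate_midi(path):
    """Stub: write a short placeholder MIDI file where the NN output would go."""
    mid = mido.MidiFile()
    track = mido.MidiTrack()
    mid.tracks.append(track)
    for note in (60, 64, 67, 72):                      # a simple arpeggio
        track.append(mido.Message("note_on", note=note, velocity=64, time=0))
        track.append(mido.Message("note_off", note=note, velocity=64, time=480))
    mid.save(path)

pygame.mixer.init()
while True:
    generate_midi("/tmp/next.mid")
    pygame.mixer.music.load("/tmp/next.mid")
    pygame.mixer.music.play()
    while pygame.mixer.music.get_busy():               # wait until the clip ends
        time.sleep(0.5)
```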

Here it is in…

About

Aleksey Tikhonov

http://altsoph.com, Data Analyst @ Yandex
