I use algorithms called neural networks to write humor. What’s fun about neural networks is they learn by example – give them a bunch of some sort of data, and they’ll try to figure out rules that let them imitate it. They power corporate finances, recognize faces, translate text, and more. I, however, like to give them silly datasets. I’ve trained neural networks to generate new paintcolors, new Halloween costumes, and new candy heart messages. When the problem is tough, the results are mixed (there was that one candy heart that just said HOLE).
One of the toughest problems I’ve ever tried? Knitting patterns.
I knew almost nothing about knitting when @JohannaB@wandering.shop sent me the suggestion one day. She sent me to the Ravelry knitting site, and to its adults-only, often-indecorous LSG forum, who as you will see are amazing people. (When asked how I should describe them, one wrote “don’t forget the glitter and swearing!”)
The knitters helped me crowdsource a dataset of 500 knitting patterns, ranging from hats to squids to unmentionables. JC Briar exported another 4728 patterns from the site stitch-maps.com.
I gave the knitting patterns to a couple of neural networks that I collectively named “SkyKnit”. Then, not knowing if they had produced anything remotely knittable, I started posting the patterns. Here’s an early example.
MrsNoddyNoddy wrote, “it’s difficult to explain why 6395, 71, 70, 77 is so asthma-inducingly funny.” (It seems that a 6000-plus stitch count is, as GloriaHanlon put it, “optimism”).
As training progressed, and as I tried some higher-performance models, SkyKnit improved. Here’s a later example.
Even at its best, SkyKnit had problems. It would sometimes repeat rows, or leave them out entirely. It could count rows fairly reliably up to about 22, but after that would start haphazardly guessing random largish numbers. SkyKnit also had trouble counting stitches, and would confidently declare at the end of certain lines that it contained 12 stitches when it was nothing of the sort.
But the knitters began knitting them. This possibly marks one of the few times in history when a computer generated code to be executed by humans.
The knitters didn’t follow SkyKnit’s directions exactly, as it turns out. For most of its patterns, doing them exactly as written would result in the pattern immediately unraveling (due to many dropped stitches), or turning into long whiplike tentacles (due to lots of leftover stitches). Or, to make the row counts match up with one another, they would have had to keep repeating the pattern until they’d reached a multiple of each row count – sometimes this was possible after a few repeats, while other times they would have had to make the pattern tens of thousands of stitches long. And other times, missing rows made the directions just plain impossible.
So, the knitters just started fixing SkyKnit’s patterns.
Knitters are very good at debugging patterns, as it turns out. Not only are there a lot of knitters who are coders, but debugging is such a regular part of knitting that the complicated math becomes second nature. Notation is not always consistent, some patterns need to be adjusted for size, and some simply have mistakes. The knitters were used to taking these problems in stride. When working with one of SkyKnit’s patterns, GloriaHanlon wrote, “I’m trying not to fudge too much, basically working on the principle that the pattern was written by an elderly relative who doesn’t speak much English.”
Each pattern required a different debugging approach, and sometimes knitters would each produce their own very different-looking versions. Here are three versions of “Paw Not Pointed 2 Stitch 2″.
Once, knitter MeganAnn came across a stitch that didn’t even exist (something SkyKnit called ’pbk’). So she had to improvise. “I googled it and went with the first definition I got, which was ‘place bead and knit’.” The resulting pattern is “Ribbed Rib Rib” below (note bead).
Even debugged, the patterns were weird. Like, really, really nonhumanly weird.
“I love how organic it comes out,“ wrote Vastra. SylviaTX agreed, loving “the organic seeming randomness. Like bubbles on water or something,”
SkyKnit’s patterns were also a pain. Michaela112358 called Row 15 of Mystery Lace (above) “a bit of a head melter”, commenting that it “lacked the rhythm that you tend to get with a normal pattern”. Maeve_ish wrote that Shetland Bird Pat “made my brain hurt so I went to bed.” ShoelessJane asked, “Okay, now who here has read Snow Crash?”
“I was laughing a few days ago because I was trying to math a Skyknit pattern and my brain…froze. Like, no longer could number at all. I stared blankly at my scribbles and at the screen wondering what had happened til somehow I rebooted. Yup, Skyknit crashed my brain.” – Rayn63
On the pattern SkyKnit called “Cherry and Acorns Twisted To”:
“Couple notes on the knitting experience, which while funny wasn’t terribly pleasurable: Because there’s no rhythm or symmetry to the pattern, I felt I was white-knuckling it through each line, really having to concentrate. There are also some stitch combinations that aren’t very comfortable to execute physically, YO, SSK in particular.
That said, I’m nearly tempted to add a bit of random AI lace to a project, perhaps as cuffs on a sweater or a short-row lace panel in part of a scarf, like Sylvia McFadden does in many of her shawl designs. As another person in the thread said, it would add a touch of spider-on-LSD.” –SarahScully
“Four repeats in to this oddball, daintily alien-looking 8-row lace pattern, and I have, improbably, begun to internalize it and get in to a rhythm like every other lace pattern.
I still have a lingering suspicion that I’m knitting a pattern that could someday communicate to an AI that I want to play a game of Global Thermonuclear War, but I suppose at least I’ll have a scarf at the end of it?” –BridgetJ
There was also this beauty of a pattern, that SkyKnit called “Tiny Baby Whale Soto”. GloriaHanlon managed somehow to knit it and described it as “a bona fide eldritch horror. Think Slenderman meets Cthulu and you wouldn’t be far wrong.”
Other than being a bit afraid of Tiny Baby Whale Soto, the knitters seem happy to do the bidding of SkyKnit, brain melts and all.
“I cast on for a lovely MKAL with a designer I totally trust and became immediately suspicious because the pattern made sense. All rows increase in an orderly manner. There are no “huh?” moments. There are no maths at all…it has all been done for me. I thought I would be happy, yo. Instead, I am kind of missing the brain scrambling and I keep looking for pigs and tentacles. Go figure.” – Rayn63
Check out the rest of the SkyKnit-generated patterns, and the glorious rainbow of weird test-knits at SkyKnit: The Collection and InfiKnit.
If you feel so inspired (and don’t mind the kind-hearted yet vigorous swearing), join the conversation on the LSG Ravelry SkyKnit thread – many of SkyKnit’s creations have not yet been test-knit at all, and others transform with every new knitter’s interpretation. Compare notes, commiserate, and do SkyKnit’s inscrutable bidding!
Heck yeah there is bonus material this week. Have some neural net-generated knitting & crochet titles. Some of them are mixed with metal band names for added creepiness. Enter your email here to get more like these:
Chicken Shrug Snuggle Features Cartube Party Filled Booties Corm Fullenflops Womp Mittens Socks of Death Tomb of Sweater Shawl Ruins
The reasons so many neurodivergent people relate to narratives about AI are worlds different from the reasons allistic people dehumanize them with comparisons to robots.
One comes from a place of “I don’t see you as fully human, just a flawed facsimile of one, lacking vital components of full personhood and tragically incomplete”. The other comes from a place of “the rules weren’t created with me in mind, and so narratives about alternative forms of cognition and being resonate with me deeply”. It’s less a case of people ‘dehumanizing themselves’, than it is recognizing how their differences can alienate them from other people.
Robot stories, pretty much by definition, explore concepts like empathy, emotions, affect, and, more generally, “social instincts”. I can’t provide statistics for this claim, but for the most part, robot stories are about being sympathetic and accepting of ‘otherness’ in these respects. Sometimes…. this isn’t executed very well. At all. Which is why I write two-thousand word rants about Star Trek and emotion chips. But for people who struggle with these things, or in whom they manifest differently, stories about ‘alternative personhood’ are in some ways more accessible and relatable than stories about characters whose personhood is never questioned.
So, rather than purge all associations between AI and neurodivergence because it’s.. inherently harmful and dehumanizing or something, I would rather see autistic and other neurodivergent people encouraged to explore these narratives from their own perspectives.
Machine learning algorithms are not like other computer programs. In the usual sort of programming, a human programmer tells the computer exactly what to do. In machine learning, the human programmer merely gives the algorithm the problem to be solved, and through trial-and-error the algorithm has to figure out how to solve it.
This often works really well – machine learning algorithms are widely used for facial recognition, language translation, financial modeling, image recognition, and ad delivery. If you’ve been online today, you’ve probably interacted with a machine learning algorithm.
But it doesn’t always work well. Sometimes the programmer will think the algorithm is doing really well, only to look closer and discover it’s solved an entirely different problem from the one the programmer intended. For example, I looked earlier at an image recognition algorithm that was supposed to recognize sheep but learned to recognize grass instead, and kept labeling empty green fields as containing sheep.
When machine learning algorithms solve problems in unexpected ways, programmers find them, okay yes, annoying sometimes, but often purely delightful.
So delightful, in fact, that in 2018 a group of researchers wrote a fascinating paper that collected dozens of anecdotes that “elicited surprise and wonder from the researchers studying them”. The paper is well worth reading, as are the original references, but here are several of my favorite examples.
Bending the rules to win
First, there’s a long tradition of using simulated creatures to study how different forms of locomotion might have evolved, or to come up with new ways for robots to walk.
Why walk when you can flop? In one example, a simulated robot was supposed to evolve to travel as quickly as possible. But rather than evolve legs, it simply assembled itself into a tall tower, then fell over. Some of these robots even learned to turn their falling motion into a somersault, adding extra distance.
[Image: Robot is simply a tower that falls over.]
Why jump when you can can-can? Another set of simulated robots were supposed to evolve into a form that could jump. But the programmer had originally defined jumping height as the height of the tallest block so – once again – the robots evolved to be very tall. The programmer tried to solve this by defining jumping height as the height of the block that was originally the *lowest*. In response, the robot developed a long skinny leg that it could kick high into the air in a sort of robot can-can.
[Image: Tall robot flinging a leg into the air instead of jumping]
Hacking the Matrix for superpowers
Potential energy is not the only energy source these simulated robots learned to exploit. It turns out that, like in real life, if an energy source is available, something will evolve to use it.
Floating-point rounding errors as an energy source: In one simulation, robots learned that small rounding errors in the math that calculated forces meant that they got a tiny bit of extra energy with motion. They learned to twitch rapidly, generating lots of free energy that they could harness. The programmer noticed the problem when the robots started swimming extraordinarily fast.
Harvesting energy from crashing into the floor: Another simulation had some problems with its collision detection math that robots learned to use. If they managed to glitch themselves into the floor (they first learned to manipulate time to make this possible), the collision detection would realize they weren’t supposed to be in the floor and would shoot them upward. The robots learned to vibrate rapidly against the floor, colliding repeatedly with it to generate extra energy.
[Image: robot moving by vibrating into the floor]
Clap to fly: In another simulation, jumping bots learned to harness a different collision-detection bug that would propel them high into the air every time they crashed two of their own body parts together. Commercial flight would look a lot different if this worked in real life.
Discovering secret moves: Computer game-playing algorithms are really good at discovering the kind of Matrix glitches that humans usually learn to exploit for speed-running. An algorithm playing the old Atari game Q*bert discovered a previously-unknown bug where it could perform a very specific series of moves at the end of one level and instead of moving to the next level, all the platforms would begin blinking rapidly and the player would start accumulating huge numbers of points.
A Doom-playing algorithm also figured out a special combination of movements that would stop enemies from firing fireballs – but it only works in the algorithm’s hallucinated dream-version of Doom. Delightfully, you can play the dream-version here
[Image: Q*bert player is accumulating a suspicious number of points, considering that it’s not doing much of anything]
Shooting the moon: In one of the more chilling examples, there was an algorithm that was supposed to figure out how to apply a minimum force to a plane landing on an aircraft carrier. Instead, it discovered that if it applied a *huge* force, it would overflow the program’s memory and would register instead as a very *small* force. The pilot would die but, hey, perfect score.
Destructive problem-solving
Something as apparently benign as a list-sorting algorithm could also solve problems in rather innocently sinister ways.
Well, it’s not unsorted: For example, there was an algorithm that was supposed to sort a list of numbers. Instead, it learned to delete the list, so that it was no longer technically unsorted.
Solving the Kobayashi Maru test:Another algorithm was supposed to minimize the difference between its own answers and the correct answers. It found where the answers were stored and deleted them, so it would get a perfect score.
How to win at tic-tac-toe: In another beautiful example, in 1997 some programmers built algorithms that could play tic-tac-toe remotely against each other on an infinitely large board. One programmer, rather than designing their algorithm’s strategy, let it evolve its own approach. Surprisingly, the algorithm suddenly began winning all its games. It turned out that the algorithm’s strategy was to place its move very, very far away, so that when its opponent’s computer tried to simulate the new greatly-expanded board, the huge gameboard would cause it to run out of memory and crash, forfeiting the game.
In conclusion
When machine learning solves problems, it can come up with solutions that range from clever to downright uncanny.
Biological evolution works this way, too – as any biologist will tell you, living organisms find the strangest solutions to problems, and the strangest energy sources to exploit. Sometimes I think the surest sign that we’re not living in a computer simulation is that if we were, some microbe would have learned to exploit its flaws.
So as programmers we have to be very very careful that our algorithms are solving the problems that we meant for them to solve, not exploiting shortcuts. If there’s another, easier route toward solving a given problem, machine learning will likely find it.
Fortunately for us, “kill all humans” is really really hard. If “bake an unbelievably delicious cake” also solves the problem and is easier than “kill all humans”, then machine learning will go with cake.
“Sometimes I think the surest sign that we’re not living in a computer simulation is that if we were, some microbe would have learned to exploit its flaws.”