it’s an ideal height distribution tbh because then whenever bruce, as an adult, is talking about how larger-than-life his father was everyone just feels bittersweet about it because the last time he saw his father he was a tiny boy and it just seems like, “oh, bruce’s memory of his father is always trapped in this time when his dad seemed like a giant”
but no, that has nothing to do with it, bruce is being completely factually correct and thomas wayne was enormous
“I assume your dad’s going to be the one that looks like you,” Clark said, adjusting his glasses as he scanned the crowd beneath the mezzanine.
“Just look for the biggest guy here,” Bruce said flatly.
Clark fought a smile.
“What.”
“Nothing! Nothing.”
Bruce waited.
“It’s just—you know.”
Bruce said nothing.
“You haven’t seen him since you were twelve.”
“Correct.”
“You maybe weren’t the tallest kid.”
Bruce said nothing.
“I’m just going to look for the guy who looks like you, rather than going by relative size.”
“And you must be the fellows who were chit-chatting with my wife!” came a voice, booming and boisterous as arms were thrown around each of their shoulders. Clark jumped; Bruce flinched.
Thomas Wayne was a good two inches taller than Clark, who was himself an inch taller than Bruce. Thomas had a glass of champagne in his right hand, which he had not spilled on Clark. There was a ping-pong ball floating in it. He had a half-empty bottle of wine in his left hand, which he had not spilled on Bruce. Between the fingers of his left hand dangled a bag of red plastic cups, unopened.
No one in the ballroom was using a red plastic cup.
Thomas’ coat and the top buttons of his shirt were undone; his bowtie had not been a bow in quite some time.
“Martha wouldn’t tell me what exactly it is you were up to,” he said cheerfully, “which I can only assume means I’d hate it!” He paused, squinting at Clark. “Oh, she must have loved you.” He gave Clark a proper once-over, down to his shoes and back up again. “Were you raised on a farm or what?”
“Why does everyone keep asking—”
“Anyway,” Thomas continued, somehow managing to pound them both on the back as he disengaged despite still having his hands full. “You two go on ahead and keep not telling me what you’re doing, if you need me I’m heading downstairs to set up a game of wine pong. It’s like beer pong, but if you’re doing it right it costs several thousand dollars! And it’s good for your heart! I’d know. I’m a doctor.”
He downed his glass of champagne and caught the ball in his teeth. He then somehow managed to arrange the items in his hands such that he could shoot them both fingerguns, clicking around the ball and waggling his eyebrows.
They watched as he slid sideways down the banister.
“I apologize for doubting your memory,” Clark said finally.
“Hm.”
“I feel like this explains a lot about your sense of humor.”
“I’m not convinced that it does.”
“… does he look how you remember?” Clark ventured.
“Usually I remember the way he looked one specific summer when I was a kid,” Bruce said thoughtfully.
Clark softened, almost reached out to put a hand on his shoulder. Then he narrowed his eyes. “No.”
“Hm?”
“I know what you’re doing, and we’re not doing it.”
“You asked.”
“I recognize that look.”
“This is just what my face looks like.”
“You’re going to make me think we’re having a moment so I let my guard down for the punchline,” Clark said, “and you’re not going to say it like it’s a punchline, so when I laugh, I look like an asshole.”
“I have no idea what you’re talking about.”
“I’m not allowed to laugh about this. You know I’m not.”
They were silent, the sounds of the party surrounding them from below.
“He had a horrible moustache,” Bruce said.
Clark pressed his knuckles to his mouth.
“I think my subconscious is trying to make death seem like a mercy.”
Clark made a muffled and hideous noise.
“Clark,” Diana scolded, and they turned to see her frowning as she approached. “This is a very difficult mission for Bruce, you mustn’t laugh.”
Clark threw up his hands in disgust.
“Or—wait.” Diana looked between them. “Was he doing it again?”
Clark nodded, lips pressed into a thin line.
“I think I remember this party,” Bruce said suddenly, looking out at the ballroom.
“What?” Clark and Diana asked simultaneously.
“It’s the one where that senator got thrown out of a window.” He pointed toward a commotion downstairs.
“What is your father doing?” Diana asked, leaning over a railing.
There was a crash of shattering glass, a series of screams, and scattered applause.
And he’ll insist he’ll be fine, “cause he’s a doctor” ?
Thomas raised an eyebrow with a level of disdain achievable only by those born to great wealth, and not at all befitting a man in the middle of using a meat cleaver to cut the nozzle off a garden hose. “Oh, I think I can handle it,” he scoffed. “I went to Yale.”
Machine learning algorithms are not like other computer programs. In the usual sort of programming, a human programmer tells the computer exactly what to do. In machine learning, the human programmer merely gives the algorithm the problem to be solved, and through trial-and-error the algorithm has to figure out how to solve it.
This often works really well – machine learning algorithms are widely used for facial recognition, language translation, financial modeling, image recognition, and ad delivery. If you’ve been online today, you’ve probably interacted with a machine learning algorithm.
But it doesn’t always work well. Sometimes the programmer will think the algorithm is doing really well, only to look closer and discover it’s solved an entirely different problem from the one the programmer intended. For example, I looked earlier at an image recognition algorithm that was supposed to recognize sheep but learned to recognize grass instead, and kept labeling empty green fields as containing sheep.
When machine learning algorithms solve problems in unexpected ways, programmers find them, okay yes, annoying sometimes, but often purely delightful.
So delightful, in fact, that in 2018 a group of researchers wrote a fascinating paper that collected dozens of anecdotes that “elicited surprise and wonder from the researchers studying them”. The paper is well worth reading, as are the original references, but here are several of my favorite examples.
Bending the rules to win
First, there’s a long tradition of using simulated creatures to study how different forms of locomotion might have evolved, or to come up with new ways for robots to walk.
Why walk when you can flop? In one example, a simulated robot was supposed to evolve to travel as quickly as possible. But rather than evolve legs, it simply assembled itself into a tall tower, then fell over. Some of these robots even learned to turn their falling motion into a somersault, adding extra distance.
[Image: Robot is simply a tower that falls over.]
Why jump when you can can-can? Another set of simulated robots were supposed to evolve into a form that could jump. But the programmer had originally defined jumping height as the height of the tallest block so – once again – the robots evolved to be very tall. The programmer tried to solve this by defining jumping height as the height of the block that was originally the *lowest*. In response, the robot developed a long skinny leg that it could kick high into the air in a sort of robot can-can.
[Image: Tall robot flinging a leg into the air instead of jumping]
Hacking the Matrix for superpowers
Potential energy is not the only energy source these simulated robots learned to exploit. It turns out that, like in real life, if an energy source is available, something will evolve to use it.
Floating-point rounding errors as an energy source: In one simulation, robots learned that small rounding errors in the math that calculated forces meant that they got a tiny bit of extra energy with motion. They learned to twitch rapidly, generating lots of free energy that they could harness. The programmer noticed the problem when the robots started swimming extraordinarily fast.
Harvesting energy from crashing into the floor: Another simulation had some problems with its collision detection math that robots learned to use. If they managed to glitch themselves into the floor (they first learned to manipulate time to make this possible), the collision detection would realize they weren’t supposed to be in the floor and would shoot them upward. The robots learned to vibrate rapidly against the floor, colliding repeatedly with it to generate extra energy.
[Image: robot moving by vibrating into the floor]
Clap to fly: In another simulation, jumping bots learned to harness a different collision-detection bug that would propel them high into the air every time they crashed two of their own body parts together. Commercial flight would look a lot different if this worked in real life.
Discovering secret moves: Computer game-playing algorithms are really good at discovering the kind of Matrix glitches that humans usually learn to exploit for speed-running. An algorithm playing the old Atari game Q*bert discovered a previously-unknown bug where it could perform a very specific series of moves at the end of one level and instead of moving to the next level, all the platforms would begin blinking rapidly and the player would start accumulating huge numbers of points.
A Doom-playing algorithm also figured out a special combination of movements that would stop enemies from firing fireballs – but it only works in the algorithm’s hallucinated dream-version of Doom. Delightfully, you can play the dream-version here
[Image: Q*bert player is accumulating a suspicious number of points, considering that it’s not doing much of anything]
Shooting the moon: In one of the more chilling examples, there was an algorithm that was supposed to figure out how to apply a minimum force to a plane landing on an aircraft carrier. Instead, it discovered that if it applied a *huge* force, it would overflow the program’s memory and would register instead as a very *small* force. The pilot would die but, hey, perfect score.
Destructive problem-solving
Something as apparently benign as a list-sorting algorithm could also solve problems in rather innocently sinister ways.
Well, it’s not unsorted: For example, there was an algorithm that was supposed to sort a list of numbers. Instead, it learned to delete the list, so that it was no longer technically unsorted.
Solving the Kobayashi Maru test:Another algorithm was supposed to minimize the difference between its own answers and the correct answers. It found where the answers were stored and deleted them, so it would get a perfect score.
How to win at tic-tac-toe: In another beautiful example, in 1997 some programmers built algorithms that could play tic-tac-toe remotely against each other on an infinitely large board. One programmer, rather than designing their algorithm’s strategy, let it evolve its own approach. Surprisingly, the algorithm suddenly began winning all its games. It turned out that the algorithm’s strategy was to place its move very, very far away, so that when its opponent’s computer tried to simulate the new greatly-expanded board, the huge gameboard would cause it to run out of memory and crash, forfeiting the game.
In conclusion
When machine learning solves problems, it can come up with solutions that range from clever to downright uncanny.
Biological evolution works this way, too – as any biologist will tell you, living organisms find the strangest solutions to problems, and the strangest energy sources to exploit. Sometimes I think the surest sign that we’re not living in a computer simulation is that if we were, some microbe would have learned to exploit its flaws.
So as programmers we have to be very very careful that our algorithms are solving the problems that we meant for them to solve, not exploiting shortcuts. If there’s another, easier route toward solving a given problem, machine learning will likely find it.
Fortunately for us, “kill all humans” is really really hard. If “bake an unbelievably delicious cake” also solves the problem and is easier than “kill all humans”, then machine learning will go with cake.
“Sometimes I think the surest sign that we’re not living in a computer simulation is that if we were, some microbe would have learned to exploit its flaws.”