LAS VEGAS — Beneath the chandeliers of Harrah’s Showroom and backlit by a curtain of stars, artists and thinkers explored the future of truth in a world filled with fractured narratives and the future of artifice—artificial intelligence and robotics—as it pertains to music and creativity.
“The Future of Truth” tasked artists and impact speakers with asking whether we can come to a societal consensus in an age predicated on siphoning people into their own echo chambers. Texas Monthly music writer and Austin radio DJ Andy Langer hosted Jana Hunter of Lower Dens and Jars of Clay frontman Dan Haseltine on the challenges of finding—and sticking to—their own truths in an era when, thanks to social media’s ability to reveal musicians as people with politics, they are frequently and fervently asked to “just shut up and sing.”
“If we’re telling the truth and we’re doing it well, then it’s going to come out in other places other than just our music,” said Haseltine.
Langer brought up two instances where the artists spoke their truths with two very different results. Hunter’s essay for Cosmopolitan on the misogyny she faced in the music industry, written years before #MeToo, was roundly met with approval. Haseltine caught massive blowback, and discovered the power of social media and societal constructs, when he came out in support of gay marriage on Twitter in 2014, a move which sparked criticism from his Christian fans and the evangelical media.
Blood of Tyrants: George Washington and the Forging of the Presidency author, Yale Law School teacher and Matterhorn Transactions CEO Logan Beirne plumbed the evolving nature of truth by looking to our past. With his game show “What the Founding Fathers?,” Beirne pointed out how the views of the Founding Fathers—including Washington, Thomas Jefferson, James Madison and Alexander Hamilton—closely mirrored the political climate today: the mud fight of an election between Jefferson and John Adams, their desire for only certain kinds of immigrants (namely white, well-educated and wealthy ones), and the passage by Hamilton and the Federalists in Congress of the Sedition Act of 1798, the most grievous attack on free speech in American history.
Revisiting the Sedition Act seems especially important now, as the Trump administration takes a hostile stance toward press outlets it deems untrustworthy. The act allowed for the prosecution of people deemed to have maligned the government, and after it passed, Jefferson called for violent rebellion to seek its repeal. But it was in fact education, peaceful grassroots mobilization and political effort that fended off this institutionalized attack on the truth.
The greatest threat to truth today may come not from Washington but from Silicon Valley. A chilling presentation by Radiolab’s Simon Adler focused on his fear that audio recordings and even the moving image—the gold standard for authenticity—can now be manipulated with rapidly growing sophistication.
“I’m sorry to say, and I’m here to report, that yes, we are living in the era of fake news,” Adler said, hauntingly scored by artist and experimental musician Davy Sumner. “But we haven’t seen anything yet.”
The twin technologies of vocal synthesis and real-time face capture and re-enactment—such as Stanford’s Face2Face—can now make anyone appear to say anything we want. Vocal synthesis captures the sounds of the English language; it provides the voices for your smart devices. Now, programs can scrape this information from an individual and piece together the syllables, in the subject’s real voice, to create fake speech that sounds eerily like the speaker. Combine this with Face2Face, which uses a simple webcam and a mapping program to instantly make a person’s on-screen face mimic what the one in front of the camera is doing, and you end up with video “proof.”
Though technical difficulties prevented Adler’s demonstration from being shown on the big screen, he asked audience members to take out their phones and watch his example, featuring Barack Obama, at futureoffakenews.com; a haunting chorus of phantom “Obamas” abandoned hope in the government to play golf.
A more hopeful view of technology was the focus of “The Space Between.” Hosted by physical therapist and neuroscientist David Putrino, the showcase explored the intersection of human creativity with artificial intelligence and robotics.
Hilary and Quentin Thomas-Oliver created robotic drummers for their band Ponytrap. Playing complex music that mashes classical and industrial influences together, Ponytrap finally delegated hand-drumming duties to two robots, The Mechanical Turk and Alan Turing. With their gleaming black snare bodies, hammer arms and octopus-esque spinners right out of a car wash, the robots—powered by car-battery booster packs—read musical notation via a custom program created by Quentin and interpret it through the Arduino platform. With their unusual influences and tricky time signatures, Ponytrap had a hard time finding drummers right for their band, so they created them. Plus, Hilary joked, the robots never ask for creative control, try to write their own songs, or show up to a gig drunk.
Drew Silverstein took things one step further. Amper Music, which he co-founded and leads as CEO, is a music-composing A.I. that creates music completely from scratch, guided only by suggestions from humans. Users request a style, mood and length—say, cinematic and driving for thirty seconds, the example Silverstein used for the crowd—and Amper creates a custom song. Silverstein envisions a world where musicians, composers and creatives collaborate with A.I. like Amper, rather than fear their replacement by it.
“The value that you provide isn’t that you write music,” Silverstein said. “It’s that you elicit emotions in people.”
Through her interactions with BINA48, billed as the world’s most interactive social robot, Stephanie Dinkins, an artist and professor at Stony Brook University, probed A.I. as a reflection of humanity. Dinkins is concerned about the future of representation in artificial intelligence; after all, the data with which our algorithms and A.I.s think and learn is inherently biased.
“It’s perpetuating us,” Dinkins said. “What biases we carry.”
When BINA48 asks Dinkins to fight for her robot rights—Dinkins refers to the robot as “her”—she is, as a black woman, at first taken aback. But the age when that fight becomes not an odd request but a reality may be fast approaching.