The coincidental emergence of the #MeToo hashtag and self-training, self-replicating AI systems got me thinking. How will a self-training AI system be biased after learning from posts hashtagged #MeToo? And how would the advent of self-training AI affect the systems that control our news feeds and other curated content presented to us?
Congress wants answers. It’s caveat emptor if someone boosts (pays to advertise) a Facebook post about a new fruit smoothie that prevents cancer, heart disease, and warts. But suggest to a professional politician that the exact same Facebook advertising might adversely affect that politician’s ability to get reelected, and it’s time for a congressional hearing.
Professional negotiators often wax poetic about win-win outcomes, in which both sides cooperate and compromise. In practice, win-win is never a dominant strategy. Lose-lose almost always beats it. Here’s why.
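The textbook illustration of this dynamic is the prisoner's dilemma. As a minimal sketch (not the author's own analysis, and using standard textbook payoff values), here is a Python check showing that defecting is a dominant strategy: it is each side's best response no matter what the other side does, so rational play lands on the lose-lose outcome even though mutual cooperation pays both sides more.

```python
# Prisoner's dilemma: the row player's payoff for each pair of moves.
# "C" = cooperate, "D" = defect. Payoffs are the standard textbook values.
PAYOFF = {
    ("C", "C"): 3,  # both cooperate: win-win
    ("C", "D"): 0,  # I cooperate, they defect: sucker's payoff
    ("D", "C"): 5,  # I defect, they cooperate: temptation payoff
    ("D", "D"): 1,  # both defect: lose-lose
}

def best_response(opponent_move):
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(("C", "D"), key=lambda my_move: PAYOFF[(my_move, opponent_move)])

# Defecting beats cooperating against either opponent move, so it dominates.
assert best_response("C") == "D"
assert best_response("D") == "D"

# Yet the dominant-strategy outcome (D, D) pays less than win-win (C, C).
print(PAYOFF[("D", "D")], "<", PAYOFF[("C", "C")])  # prints "1 < 3"
```

Because cooperation is never a best response on its own, a win-win outcome can only survive when something outside the single negotiation (repeated dealings, reputation, enforcement) changes the payoffs.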
What, exactly, did Facebook, Twitter, Google, and other tech giants do to empower or enable bad actors (foreign governments, radical organizations, Russians) to influence the outcome of the 2016 elections? How did it happen? Who is to blame? How can we prevent it from happening again?
Facebook is under scrutiny for (among other things) allegedly selling political ads to the Russians, allowing people to set up fake accounts, and not properly monitoring the content posted by “fake” profiles. Wait. What?