<http://nautil.us/issue/74/networks/who-will-design-the-future> Who Will Design the Future? AI will be staggeringly diverse. Its developers should be, too. [...]

We all have a role to play in building AI and ensuring that this revolutionary technology is used for the benefit of all. What kind of skills and intelligence will be required to build our best technological future? How will we avoid the pitfalls of homogeneity?

Almost all of the major advances in AI development are currently being made in silos: disparate laboratories, secret government facilities, elite academic institutions, and the offices of very large companies working independently throughout the world. Few private companies (as of this writing) are actively sharing their work with competitors, despite the efforts of organizations such as OpenAI, the MIT-IBM Watson AI Lab, and the Future of Life Institute to raise awareness of the importance of transparency in building AI. Keeping intellectual property secret is deeply ingrained in the culture of private enterprise, but as AI technology accelerates and proliferates, our public duty to one another demands that we prioritize transparency, accountability, fairness, and ethical decision-making.

The people working in the various fields of AI are presently doing so with little or no oversight beyond a few self-imposed ethical guidelines. They have no consistent set of laws or regulations to guide them, either in general or within industries. The intelligent-machine gold rush is still in its Wild West phase, and there are huge financial rewards on the line. Many believe that the first trillionaire will be an AI entrepreneur.
While the rewards of inventing the next generation of smart tech are undoubtedly attracting the best and the brightest from around the world, and demand for AI experts is substantial, those experts constitute a largely homogeneous group. Many of AI's foundational concepts were created by an even less diverse set of people. The building blocks of AI are incredibly eclectic, drawing from fields as distinct as psychology, neuroscience, biomimicry, and computer science, yet the demographic of AI's developers does not reflect this diversity. When researcher Timnit Gebru attended the Neural Information Processing Systems conference in 2016, approximately 8,500 people were there. She counted six black people among them; she was the only black woman. If the players are all very similar, the game is already stacked.

The teams designing smart technology include some of the most astute computer scientists working today, and they have made and will continue to make extraordinary contributions to science. Yet for the most part these brilliant people, apart from some of those writing on the subject and some of the coalitions calling for more transparency in AI research, are working in isolation. The result is a silo effect. To avoid its most harmful repercussions, we need a broader discussion about the homogeneity of the people involved in artificial intelligence development. Although many of the current leaders in the AI field have been trained at the most prestigious schools and have earned advanced degrees, most have received virtually no training in the ethical ramifications of creating intelligent machines, largely because such training has not historically been a standard expectation in the field.
While some pilot programs are underway, including a new AI college at MIT, courses on ethics, values, and human rights are not yet integral parts of the computer science and engineering curriculum. They must be. The current educational focus on specialized skills in a field such as computer science can also discourage people from looking beyond the labs and organizations in which they already work. In the next generation of AI education, we will need to guard against such overspecialization. It is crucial that we institute these pedagogical changes at every level, including for the very youngest future scientists and policy-makers. According to Area9 cofounder Ulrik Juul Christensen, "discussion is rapidly moving to the K-12 education system, where the next generation must prepare for a world in which advanced technology such as artificial intelligence and robotics will be the norm and not the novelty."

Some of the biggest players presently in the AI game are the giant technology companies: Google, Facebook, Microsoft, Baidu, Alibaba, Apple, Amazon, Tesla, IBM (which built Watson), and DeepMind (which made AlphaGo and was acquired by Google). These companies swallow up smaller AI companies at a rapid rate. This consolidation of technological knowledge within a few elite for-profit companies is already underway and will continue, driven by conventional power dynamics. We will need, among many other societal changes, incentives that encourage entrepreneurship and spawn smaller, more agile, and more diverse companies in this space. Given the economic trends toward tech monopolies and against government intervention in corporate power consolidation, we have to counter not only by investing in creative AI start-ups but also by educating the public on how important it is to infuse transparency, teamwork, and inclusive thinking into the development of AI.
The demand for the most accomplished people in AI and related fields is fierce, and the relatively small number of corporations that control enormous resources can therefore offer significant compensation. Even elite universities such as Oxford and Cambridge complain that tech giants are stealing their talent. On the federal level, the U.S. Department of Defense's Defense Advanced Research Projects Agency is readying AI for the government's military use. Governments and tech giants from Russia to China are hard at work competing to build the most robust intelligent technology. While each nation is covert about its process, sources indicate that China and Russia are outpacing the United States in AI development in what is being called the next space race. Our chance to include diverse voices has a limited horizon.

Concentrating AI talent in a very small and secretive group of organizations sets a dangerous precedent that can inhibit democratization of the technology. It also means that less rigorous academic research is being conducted and published than could be achieved if ideas were shared more freely. With a primarily capitalistic focus on growth, expansion, and profit, the pendulum of public discourse swings away from a deeper understanding of the philosophical and human repercussions of building these tools, topics that researchers and others outside these siloed environments are freer to debate in academic institutions.

To better manage the looming menaces posed by developing smart technologies, let's invite the largest possible spectrum of thought into the room. This commitment must go beyond having diverse voices, though that is a critical starting point. To collaborate effectively (and for good), we must move toward collective intelligence that harnesses various skills, backgrounds, and resources, gathering not only smart individuals but also smart teams. Thomas W. Malone, in his book Superminds, reminds us that our collective intelligence, not the genius of isolated individuals, is responsible for almost all human achievement in business, government, science, and beyond. And with intelligent tech, we are all about to get a lot smarter. Harnessing our collective ingenuity can help us move past complacency and realize our best future.

The company Unanimous AI uses swarm intelligence technology (something like a hive mind), inspired by swarms in nature, to amplify human wisdom, knowledge, and intuition, optimizing group dynamics to enhance decision-making. Another idea is to embrace more open-source algorithms, which better support algorithmic transparency and information sharing: open source allows developers to access the public work of others and build upon it. More aspirationally, we must endeavor to design a moral compass with a broad group of contributors and apply it throughout the entire AI ecosystem. [...]