Geoffrey Hinton
Geoffrey Everest Hinton (born 6 December 1947) is a British-Canadian computer scientist, cognitive scientist, and cognitive psychologist known for his work on artificial neural networks, which earned him the title "the Godfather of AI".<ref name=NewYorkerProfile />
Hinton is University Professor Emeritus at the University of Toronto. From 2013 to 2023, he divided his time between Google Brain and the University of Toronto, before publicly announcing his departure from Google in May 2023, citing concerns about the many risks of artificial intelligence (AI) technology.<ref name=":1">Template:Cite web</ref><ref name="Grdn202305">Template:Cite news</ref> In 2017, he co-founded and became the chief scientific advisor of the Vector Institute in Toronto.<ref>Template:Cite journal</ref><ref>Template:Cite web</ref>
With David Rumelhart and Ronald J. Williams, Hinton was co-author of a highly cited paper published in 1986 that popularised the backpropagation algorithm for training multi-layer neural networks,<ref name=backprop/> although they were not the first to propose the approach.<ref name="schmidhuber" /> Hinton is viewed as a leading figure in the deep learning community.Template:Refn AlexNet, an image-recognition network designed in collaboration with his students Alex Krizhevsky<ref name="quartz">Template:Cite web</ref> and Ilya Sutskever for the 2012 ImageNet challenge,<ref name=alexnips2012>Template:Cite conference</ref> was a breakthrough in the field of computer vision.<ref>Template:Cite news</ref>
Hinton received the 2018 Turing Award, together with Yoshua Bengio and Yann LeCun, for their work on deep learning.<ref>Template:Cite news</ref> They are sometimes referred to as the "Godfathers of Deep Learning"<ref>Template:Cite web</ref><ref>Template:Cite web</ref> and have continued to give public talks together.<ref>Template:Cite web</ref><ref>Template:Cite web</ref> He was also awarded, along with John Hopfield, the 2024 Nobel Prize in Physics for "foundational discoveries and inventions that enable machine learning with artificial neural networks".<ref name=":7">Template:Cite web</ref><ref name="cbc-ap202410">Template:Cite news</ref>
In May 2023, Hinton announced his resignation from Google to be able to "freely speak out about the risks of A.I."<ref name=":2" /> He has voiced concerns about deliberate misuse by malicious actors, technological unemployment, and existential risk from artificial general intelligence.<ref name=":0">Template:Cite episode</ref> He noted that establishing safety guidelines will require cooperation among those competing in the use of AI in order to avoid the worst outcomes.<ref>Template:Cite web</ref> After receiving the Nobel Prize, he called for urgent research into AI safety to figure out how to control AI systems smarter than humans.<ref>Template:Cite web</ref><ref>Template:Cite web</ref><ref name="Nobel Prize">Template:Cite web</ref>
Education
Hinton was born on 6 December 1947<ref name=whoswho>Template:Who's Who</ref> in Wimbledon, England, and was educated at Clifton College in Bristol.<ref>Template:Cite web</ref> In 1967, he matriculated as an undergraduate at King's College, Cambridge, and, after repeatedly switching between fields such as natural sciences, history of art, and philosophy, graduated with a Bachelor of Arts degree in experimental psychology from the University of Cambridge in 1970.<ref name=whoswho/><ref name="CV Toronto">Curriculum Vitae Geoffrey E. Hinton - website of the Department of Computer Science at the University of Toronto</ref> He then spent a year as a carpenter's apprentice before returning to academic study.<ref name="nytimes" /> From 1972 to 1975, he continued his studies at the University of Edinburgh, where he was awarded a PhD in artificial intelligence in 1978 for research supervised by Christopher Longuet-Higgins, who favoured the symbolic AI approach over the neural network approach.<ref name="CV Toronto" /><ref name=mathgene>Template:MathGenealogy</ref><ref>Template:Cite thesis Template:Free access</ref><ref name="nytimes" />
Career and research
After his PhD, Hinton initially worked at the University of Sussex and at the MRC Applied Psychology Unit. After having difficulty securing research funding in Britain,<ref name=nytimes/> he worked in the US at the University of California, San Diego and Carnegie Mellon University.<ref name=whoswho/> He was the founding director of the Gatsby Charitable Foundation Computational Neuroscience Unit at University College London.<ref name=whoswho/> He is<ref name="cs.toronto.edu">Template:Cite web</ref> University Professor Emeritus in the Department of Computer Science at the University of Toronto, with which he has been affiliated since 1987.<ref>Template:Cite web</ref>
Upon his arrival in Canada in 1987, Hinton was appointed a Fellow of the Canadian Institute for Advanced Research (CIFAR) in its first research program, Artificial Intelligence, Robotics & Society.<ref>Template:Cite web</ref> In 2004, Hinton and collaborators successfully proposed the launch of a new CIFAR program, "Neural Computation and Adaptive Perception"<ref>Template:Cite web</ref> (NCAP), today named "Learning in Machines & Brains". Hinton went on to lead NCAP for ten years.<ref>Template:Cite web</ref> Among the program's members are Yoshua Bengio and Yann LeCun, with whom Hinton would share the ACM A.M. Turing Award in 2018.<ref>Template:Cite web</ref> All three Turing winners remain members of the CIFAR Learning in Machines & Brains program.<ref>Template:Cite web</ref>
Hinton taught a free online course on Neural Networks on the education platform Coursera in 2012.<ref>Template:Cite web</ref> He co-founded DNNresearch Inc. in 2012 with his two graduate students Alex Krizhevsky and Ilya Sutskever at the University of Toronto’s department of computer science. In March 2013, Google acquired DNNresearch Inc. for $44 million, and Hinton planned to "divide his time between his university research and his work at Google".<ref>Template:Cite press release</ref><ref>Template:Cite web</ref><ref>Template:Cite web</ref>
Hinton's research concerns ways of using neural networks for machine learning, memory, perception, and symbol processing. He has written or co-written more than 200 peer-reviewed publications.<ref name=googlescholar>Template:Google scholar id</ref><ref name=scopus>Template:Scopus id</ref>
While Hinton was a postdoc at UC San Diego, David E. Rumelhart, Hinton, and Ronald J. Williams applied the backpropagation algorithm to multi-layer neural networks. Their experiments showed that such networks can learn useful internal representations of data.<ref name="backprop">Template:Cite journal</ref> In a 2018 interview,<ref name=ford>Template:Cite book</ref> Hinton said that "David E. Rumelhart came up with the basic idea of backpropagation, so it's his invention". Although this work was important in popularising backpropagation, it was not the first to suggest the approach.<ref name="schmidhuber">Template:Cite journal</ref> Reverse-mode automatic differentiation, of which backpropagation is a special case, was proposed by Seppo Linnainmaa in 1970, and Paul Werbos proposed using it to train neural networks in 1974.<ref name="schmidhuber"/>
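As popularised by the 1986 paper, backpropagation computes the gradient of a loss with respect to every weight by propagating errors backwards through the layers using the chain rule. The following is a minimal illustrative sketch for a two-layer network trained on XOR; the layer sizes, activations, learning rate, and step count are hypothetical choices for the example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy task: XOR, which a single-layer network cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer

for step in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
    # Backward pass: apply the chain rule layer by layer.
    d_logit = (out - y) / len(X)                # sigmoid + cross-entropy gradient
    dW2, db2 = h.T @ d_logit, d_logit.sum(axis=0)
    d_h = (d_logit @ W2.T) * (1.0 - h ** 2)     # tanh derivative
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)
    # Gradient-descent update.
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= 0.5 * g

print(out.round(2))  # close to [[0], [1], [1], [0]] after training
```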
In 1985, Hinton co-invented Boltzmann machines with David Ackley and Terry Sejnowski.<ref>Ackley, David H; Hinton Geoffrey E; Sejnowski, Terrence J (1985), "A learning algorithm for Boltzmann machines", Cognitive science, Elsevier, 9 (1): 147–169</ref> His other contributions to neural network research include distributed representations, time delay neural network, mixtures of experts, Helmholtz machines and product of experts.<ref>Template:Cite web</ref> An accessible introduction to Geoffrey Hinton's research can be found in his articles in Scientific American in September 1992 and October 1993.<ref>Template:Cite web</ref> In 2007, Hinton coauthored an unsupervised learning paper titled Unsupervised learning of image transformations.<ref>Template:Cite journal</ref> In 2008, he developed the visualization method t-SNE with Laurens van der Maaten.<ref>Template:Cite web</ref><ref>Template:Cite journal</ref>
In October and November 2017, Hinton published two open access research papers on the theme of capsule neural networks,<ref>Template:Cite journal</ref><ref>Template:Cite web</ref> which, according to Hinton, are "finally something that works well".<ref name="Geib">Template:Cite web</ref>
In May 2023, Hinton publicly announced his resignation from Google. He explained his decision by saying that he wanted to "freely speak out about the risks of A.I." and added that a part of him now regrets his life's work.<ref name=":1" /><ref name=":2">Template:Cite news</ref>
Notable former PhD students and postdoctoral researchers from his group include Peter Dayan,<ref name="hinton_postdocs">Template:Cite web</ref> Sam Roweis,<ref name="hinton_postdocs" /> Max Welling,<ref name="hinton_postdocs" /> Richard Zemel,<ref name="mathgene" /><ref name="zemphd" /> Brendan Frey,<ref name="brenphd" /> Radford M. Neal,<ref name="radphd" /> Yee Whye Teh,<ref name="tehphd" /> Ruslan Salakhutdinov,<ref name="rusphd" /> Ilya Sutskever,<ref name="sutsphd" /> Yann LeCun,<ref>Template:Cite web</ref> Alex Graves,<ref name="hinton_postdocs" /> Zoubin Ghahramani,<ref name="hinton_postdocs" /> and Peter Fitzhugh Brown.<ref>Template:Cite web</ref>
Recent scientific skepticism and philosophical stance
In 2021, Hinton single-authored a paper on a conjectured architecture called GLOM,<ref name="direct.mit.edu">Template:Cite journal</ref> an abbreviation he quips stands for "Geoff's Last Original Model". Since his retirement from Google, he has expressed a desire to spend more time on such philosophical work.<ref>Template:Cite web</ref> In the GLOM paper, he identified several fundamental limitations of existing neural networks.<ref name="direct.mit.edu"/> For example, neural networks still lack the ability to represent how a whole (such as a car) breaks down into its constituent parts (such as a wheel), or to model the coordinate transform that relates each part to the larger whole. This stance can be traced back to his papers of decades earlier on learning canonical frames of reference in neural networks.<ref>Hinton, G. E. (1981). "A parallel computation that assigns canonical object-based frames of reference". Proceedings of the 7th International Joint Conference on Artificial Intelligence, Volume 2, pp. 683–685.</ref> Hinton further argues that enabling vision systems to dynamically encode such part–whole parse trees is analogous to how existing natural-language-processing systems construct parse trees over sentences.<ref>https://proceedings.neurips.cc/paper/2015/file/277281aada22045c03945dcb2ca6f2ec-Paper.pdf</ref> He has hypothesised that systems such as GLOM-BERT could help encode this hierarchical understanding of the world.
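The part–whole coordinate transforms Hinton describes are commonly modelled as homogeneous transformation matrices: a detected part's pose, multiplied by a part-to-whole transform, yields a prediction ("vote") for the pose of the whole, and agreement among votes from different parts signals that the whole is present. The sketch below illustrates that prediction step; the 2-D poses and the fixed wheel-to-car transform are hypothetical values for illustration, not parameters from GLOM.

```python
import numpy as np

def pose_matrix(x, y, theta):
    """A 2-D pose as a 3x3 homogeneous transformation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Hypothetical pose of a detected part (a wheel) in image coordinates.
wheel_pose = pose_matrix(x=2.0, y=1.0, theta=0.1)

# Transform from the wheel's frame to the car's frame (in a learned
# system this would be a weight matrix; here it is fixed by hand).
wheel_to_car = pose_matrix(x=-1.5, y=-0.5, theta=0.0)

# The part votes for the pose of the whole; votes from different parts
# that cluster together are evidence that the whole object is present.
car_pose_vote = wheel_pose @ wheel_to_car
print(car_pose_vote)
```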
In the 1980s, Hinton was part of the "Parallel Distributed Processing" (PDP) research group, alongside notable scientists including Terrence Sejnowski, Francis Crick, David Rumelhart, and James L. McClelland. The group championed connectionism during the debates of the AI winter. A key question was how a neural network could encode rules of logic and learn rules of grammar merely by observing data. Connectionists held that neural networks could learn such representations in the strengths of the weights of their synapse-like connections, whereas symbolists such as Noam Chomsky argued for reliance on explicit symbols. Hinton has criticised Chomsky's theory of language in a talk at MIT.<ref>Template:Cite web</ref> The findings of the PDP group were published in a two-volume set.<ref>Template:Cite book</ref><ref>Template:Cite book</ref> This work was instrumental in settling the debate over whether neural networks with more than one layer could be trained at all and perform non-trivial tasks; the backpropagation algorithm was a key contribution of this period.
During his Turing Award lecture in 2020, Hinton discussed "the future of neural nets", including the ability of neural networks to operate on multiple time-scales, for example through slow and fast weights.<ref>Template:Cite web</ref> He had published a paper on fast weights at NeurIPS 2016.<ref>Template:Cite arXiv</ref> Notable among these ideas is true recursion in neural networks, in which a network processes a part of the input using the same hardware that it uses to process the whole.
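In the fast-weights scheme of the 2016 paper, slowly learned weights are paired with a rapidly decaying auxiliary matrix that stores a Hebbian trace of recent hidden states, giving the network a fast time-scale of adaptation. A minimal sketch of that update rule follows; the decay rate, learning rate, and stand-in hidden states are hypothetical values for illustration.

```python
import numpy as np

def fast_weight_update(A, h, decay=0.95, eta=0.5):
    """One fast-weight memory update, A(t) = decay * A(t-1) + eta * h h^T:
    an outer-product (Hebbian) trace of the recent hidden states."""
    return decay * A + eta * np.outer(h, h)

# The fast weights A supply an extra, quickly changing contribution
# (A @ h) to the next hidden state alongside the slow recurrent weights.
rng = np.random.default_rng(0)
A = np.zeros((4, 4))
for _ in range(3):
    h = np.tanh(rng.normal(size=4))  # stand-in hidden state
    A = fast_weight_update(A, h)
print(A @ h)
```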
Although Hinton described capsules in 2017 as "finally something that works well",<ref name="Geib"/> he has since expressed growing concern over their limitations. For example, capsules require allocating separate hardware to each possible instance of the object type they represent.<ref name="direct.mit.edu"/> Capsules also rely on expensive EM routing procedures, which make them intractable in practice; later work replaced EM routing with attention-based routing mechanisms.<ref>Template:Cite arXiv</ref> Hinton has more recently suggested eliminating the routing procedure altogether, advocating self-organising systems such as his GLOM architecture. Such self-organising systems were also explored by earlier researchers, notably John von Neumann (in work left unfinished at his death)<ref>https://cba.mit.edu/events/03.11.ASE/docs/VonNeumann.pdf</ref> and John Conway.
Hinton also co-authored a seminal paper on contrastive learning.<ref>Template:Cite web</ref> The idea is to push together the representations of augmented versions of the same image and pull apart the representations of dissimilar images. In 2022, however, Hinton delivered a talk at Stanford University<ref>Template:Cite web</ref> highlighting the limitations of contrastive learning.<ref>timestamp 27:04, https://www.youtube.com/watch?v=CYaju6aCMoQ</ref> In GLOM, Hinton proposed the idea of "islands of agreement", in which pixels belonging to the same object come to agree with one another. Later work in 2021 and 2023 observed such islands in practice.<ref>Template:Cite journal</ref><ref>Oquab, Maxime, et al. (2023). "DINOv2: Learning Robust Visual Features without Supervision". arXiv preprint arXiv:2304.07193.</ref>
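The push-together/pull-apart objective described above is typically implemented as a normalised-temperature cross-entropy (NT-Xent/InfoNCE-style) loss over a batch of paired augmented views. Below is a minimal sketch of that loss, assuming the common SimCLR-like formulation; the temperature and the random embeddings are hypothetical.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive loss for a batch: row i of z1 and row i of z2 embed two
    augmented views of the same image (a positive pair); every other pair
    in the batch is treated as negative and pushed apart."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarities
    sim = (z @ z.T) / temperature
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # Row i's positive sits at i + n (and vice versa).
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()

rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
print(nt_xent_loss(z1, z2))
```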
Hinton has described some of his recent ideas as "not describing a working system".<ref>Template:Cite arXiv</ref> However, notable experts such as Yoshua Bengio have come out publicly in favour of these ideas: "Geoff has produced amazingly powerful intuitions many times in his career, many of which have proven right. Hence, I pay attention to them, especially when he feels as strongly about them as he does about GLOM."<ref>Template:Cite web</ref> Hinton recently co-authored a paper exploring how GLOM handles extreme viewpoint changes.<ref>Template:Cite arXiv</ref> Ideas from GLOM have since been shown to work in practice, in work presented at NeurIPS 2024.<ref>Template:Cite book</ref>
At the 2022 Conference on Neural Information Processing Systems (NeurIPS), Hinton introduced a new learning algorithm for neural networks that he calls the "Forward-Forward" algorithm. The idea is to replace the traditional forward-backward passes of backpropagation with two forward passes: one with positive (i.e. real) data and the other with negative data that could be generated solely by the network.<ref>Template:Cite arXiv</ref><ref>Template:Cite web</ref> The algorithm is inspired by a long line of research suggesting that the brain does not perform backpropagation and does not rely on optimising global objectives. Hinton co-authored a Nature paper discussing the topic in more detail.<ref>Template:Cite journal</ref> The approach has contributed to recent interest in fine-tuning billion-parameter language models using only forward passes, without storing explicit gradients for every layer in memory.<ref>https://proceedings.neurips.cc/paper_files/paper/2023/file/a627810151be4d13f907ac898ff7e948-Paper-Conference.pdf</ref> An official implementation of Forward-Forward by Sindy Löwe has been posted on Hinton's website.<ref>Template:Cite web</ref>
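In the Forward-Forward algorithm, each layer is trained locally: its "goodness" (Hinton proposes the sum of squared activities) should exceed a threshold on positive data and fall below it on negative data, with no backward pass through the network. The following single-layer sketch follows that recipe; the layer size, threshold, learning rate, and stand-in inputs are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ff_layer_step(W, x_pos, x_neg, threshold=2.0, lr=0.03):
    """One local Forward-Forward update for a single ReLU layer.
    Goodness = sum of squared activities; the layer learns to put goodness
    above `threshold` for positive data and below it for negative data."""
    for x, sign in ((x_pos, +1.0), (x_neg, -1.0)):
        h = np.maximum(0.0, W @ x)             # forward pass only
        goodness = np.sum(h * h)
        p = sigmoid(sign * (goodness - threshold))
        # Local gradient of -log p with respect to W (no backward pass
        # through other layers is needed).
        dh = (-sign * (1.0 - p)) * 2.0 * h
        dh[h <= 0.0] = 0.0                     # ReLU gate
        W -= lr * np.outer(dh, x)
    return W

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 4))
x_pos = rng.normal(size=4)                     # stand-in "real" input
x_neg = rng.normal(size=4)                     # stand-in "negative" input
W = ff_layer_step(W, x_pos, x_neg)
```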
In recent talks at the Vector Institute,<ref>Template:Cite web</ref><ref>Template:Cite arXiv</ref> Hinton has argued for a new kind of analogue intelligence that he terms "mortal computation". The idea involves two kinds of networks: large networks trained via backpropagation on large GPU clusters, and smaller networks trained on edge devices using the Forward-Forward algorithm. Hinton has also been vocal about the benefits of analogue computers, in which, instead of multiplying matrices digitally, one operates on voltages and conductances to carry out similar computations.
Hinton has also advocated exploring sleep-like mechanisms in the brain.<ref>https://www.cs.toronto.edu/~hinton/absps/ws.pdf</ref> He argues that existing neural networks typically sample external input from the environment (say, an input image), but that one could instead sample "dream-like states" within the network itself. This could yield generative models, and might help explain how humans, and perhaps large language models, have a sensation of subjective experience even while sleeping or merely thinking.<ref>Template:Cite web</ref>
Hinton's research continues to influence researchers around the world. One of his notable quotes: "The future depends on some graduate student who is deeply suspicious of everything I have said."<ref>https://x.com/mldcmu/status/1082299371562196993</ref>
Honours and awards
(Pictured: Russ Salakhutdinov, Richard S. Sutton, Geoffrey Hinton, Yoshua Bengio, and Steve Jurvetson)
Hinton has been a Fellow of the US Association for the Advancement of Artificial Intelligence (FAAAI) since 1990.<ref>Template:Cite web</ref> He was elected a Fellow of the Royal Society of Canada (FRSC) in 1996,<ref>Geoffrey Hinton, FRSC, Awarded 2024 Nobel Prize in Physics - website of the Royal Society of Canada</ref> and a Fellow of the Royal Society of London (FRS) in 1998.<ref name=frs>Template:Cite web One or more of the preceding sentences incorporates text from the royalsociety.org website where: Template:Blockquote</ref> He was the first winner of the Rumelhart Prize in 2001.<ref>Template:Cite web</ref> His certificate of election for the Royal Society reads: Template:Centered pull quote
In 2001, Hinton was awarded an honorary Doctor of Science (DSc) degree from the University of Edinburgh.<ref name="CV Toronto" /><ref>Template:Cite news</ref> He was elected an International Honorary Member of the American Academy of Arts and Sciences in 2003,<ref>Template:Cite web</ref> and in the same year a Fellow of the US Cognitive Science Society.<ref>Template:Cite web</ref> He was the 2005 recipient of the IJCAI Award for Research Excellence, a lifetime-achievement award.<ref>Template:Cite web</ref> He was awarded the 2011 Herzberg Canada Gold Medal for Science and Engineering,<ref>Template:Cite news</ref> and in the same year an honorary DSc degree from the University of Sussex.<ref name="CV Toronto" /> In 2012, he received the Canada Council Killam Prize in Engineering. In 2013, he was awarded an honorary doctorate from the Université de Sherbrooke.<ref name="CV Toronto" /><ref>Template:Cite web</ref> Hinton was elected an Honorary Foreign Member of the Spanish Royal Academy of Engineering in 2015.<ref name="CV Toronto" />
In 2016, Hinton was elected an International Member of the US National Academy of Engineering "for contributions to the theory and practice of artificial neural networks and their application to speech recognition and computer vision".<ref>Template:Cite web</ref><ref>Template:Cite web</ref> He received the 2016 IEEE/RSE Wolfson James Clerk Maxwell Award,<ref>Template:Cite web</ref> and that year also won the BBVA Foundation Frontiers of Knowledge Award in the Information and Communication Technologies category, "for his pioneering and highly influential work" to endow machines with the ability to learn.<ref>Template:Cite web</ref>
Together with Yann LeCun and Yoshua Bengio, Hinton won the 2018 Turing Award for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing.<ref>Template:Cite web</ref><ref>Template:Cite news</ref><ref>Template:Cite web</ref> Also in 2018, he became a Companion of the Order of Canada (CC).<ref>Template:Cite web</ref> In 2021, he received the Dickson Prize in Science from Carnegie Mellon University<ref>Template:Cite web</ref> and in 2022 the Princess of Asturias Award in the Scientific Research category, along with Yann LeCun, Yoshua Bengio, and Demis Hassabis.<ref>Template:Cite web</ref> In the same year, Hinton received an Honorary DSc degree from the University of Toronto.<ref name="CV Toronto" /> In 2023, he was named an ACM Fellow,<ref>Template:Cite web</ref> elected an International Member of the US National Academy of Sciences,<ref>Template:Cite web</ref> and received Lifeboat Foundation's 2023 Guardian Award along with Ilya Sutskever.<ref>Template:Cite web</ref>
In 2024, he was jointly awarded the Nobel Prize in Physics with John Hopfield "for foundational discoveries and inventions that enable machine learning with artificial neural networks."<ref>Template:Cite journal</ref> His development of the Boltzmann machine was explicitly mentioned in the citation.<ref name=":7" /><ref>Template:Cite AV media</ref> When the New York Times reporter Cade Metz asked Hinton to explain in simpler terms how the Boltzmann machine could "pretrain" backpropagation networks, Hinton quipped that Richard Feynman reportedly said: "Listen, buddy, if I could explain it in a couple of minutes, it wouldn't be worth the Nobel Prize."<ref>Template:Cite news</ref> That same year, he received the VinFuture Prize grand award alongside Yoshua Bengio, Yann LeCun, Jen-Hsun Huang, and Fei-Fei Li for groundbreaking contributions to neural networks and deep learning algorithms.<ref>Template:Cite web</ref>
In 2025 he was awarded the Queen Elizabeth Prize for Engineering jointly with Yoshua Bengio, Bill Dally, John Hopfield, Yann LeCun, Jen-Hsun Huang and Fei-Fei Li.<ref>Template:Cite web</ref><ref>Template:Cite AV media</ref> He was also awarded the King Charles III Coronation Medal.<ref>Template:Cite web</ref>
Views
Risks of artificial intelligence
Template:See also Template:External media In 2023, Hinton expressed concerns about the rapid progress of AI.<ref name=":0" /><ref name=":2" /> He had previously believed that artificial general intelligence (AGI) was "30 to 50 years or even longer away."<ref name=":2" /> However, in a March 2023 interview with CBS, he said that "general-purpose AI" might be fewer than 20 years away and could bring about changes "comparable in scale with the industrial revolution or electricity."<ref name=":0" />
In an interview with The New York Times published on 1 May 2023,<ref name=":2" /> Hinton announced his resignation from Google so he could "talk about the dangers of AI without considering how this impacts Google."<ref name=":3">Template:Cite tweet</ref> He noted that "a part of him now regrets his life's work".<ref name=":2" /><ref name="Grdn202305"/>
In early May 2023, Hinton said in an interview with the BBC that AI might soon surpass the information capacity of the human brain. He described some of the risks posed by these chatbots as "quite scary". Hinton explained that chatbots have the ability to learn independently and share knowledge, so that whenever one copy acquires new information, it is automatically disseminated to the entire group, allowing AI chatbots to accumulate knowledge far beyond the capacity of any individual.<ref name=":6">Template:Cite news</ref> In 2025, he said "My greatest fear is that, in the long run, it'll turn out that these kind of digital beings we're creating are just a better form of intelligence than people. […] We'd no longer be needed. […] If you want to know how it's like not to be the apex intelligence, ask a chicken."<ref>Template:Cite interview</ref>
Existential risk from AGI
Hinton has expressed concerns about the possibility of an AI takeover, stating that "it's not inconceivable" that AI could "wipe out humanity".<ref name=":0" /> Hinton said in 2023 that AI systems capable of intelligent agency would be useful for military or economic purposes.<ref>Template:Cite interview Excerpts were broadcast in Template:Harvtxt, but the full interview was only published online.</ref> He worries that generally intelligent AI systems could "create sub-goals" that are unaligned with their programmers' interests.Template:Sfn He says that AI systems may become power-seeking or prevent themselves from being shut off, not because programmers intended them to, but because those sub-goals are useful for achieving later goals.<ref name=":6" /> In particular, Hinton says "we have to think hard about how to control" AI systems capable of self-improvement.Template:Sfn
Catastrophic misuse
Hinton reports concerns about deliberate misuse of AI by malicious actors, stating that "it is hard to see how you can prevent the bad actors from using [AI] for bad things."<ref name=":2" /> In 2017, Hinton called for an international ban on lethal autonomous weapons.<ref>Template:Cite web</ref> In a 2025 interview, Hinton cited the use of AI by bad actors to create lethal viruses as one of the greatest existential threats posed in the short term: "It just requires one crazy guy with a grudge... you can now create new viruses relatively cheaply using AI. And you don't need to be a very skilled molecular biologist to do it."<ref>Template:Cite AV media</ref>
Economic impacts
Hinton was previously optimistic about the economic effects of AI, noting in 2018: "The phrase 'artificial general intelligence' carries with it the implication that this sort of single robot is suddenly going to be smarter than you. I don't think it's going to be that. I think more and more of the routine things we do are going to be replaced by AI systems."<ref name=":4">Template:Cite web</ref> Hinton had also argued that AGI would not make humans redundant: "[AI in the future is] going to know a lot about what you're probably going to want to do... But it's not going to replace you."<ref name=":4" />
In 2023, however, Hinton became "worried that AI technologies will in time upend the job market" and take away more than just "drudge work".<ref name=":2" /> He said in 2024 that the British government would have to establish a universal basic income to deal with the impact of AI on inequality.<ref>Template:Cite web</ref> In Hinton's view, AI will boost productivity and generate more wealth. But unless the government intervenes, it will only make the rich richer and hurt the people who might lose their jobs. "That's going to be very bad for society," he said.<ref>Template:Cite web</ref>
At Christmas 2024 he had become somewhat more pessimistic, saying that there was a "10 to 20 percent chance" that AI would be the cause of human extinction within the following three decades (he had previously suggested a 10% chance, without a timescale).<ref name=milmo>Template:Cite news</ref> He expressed surprise at the speed with which AI was advancing, and said that most experts expected AI to advance, probably in the next 20 years, to be "smarter than people ... a scary thought. ... So just leaving it to the profit motive of large companies is not going to be sufficient to make sure they develop it safely. The only thing that can force those big companies to do more research on safety is government regulation."<ref name=milmo/> Another "godfather of AI", Yann LeCun, disagreed, saying AI "could actually save humanity from extinction".<ref name=milmo/>
Politics
Hinton is a socialist.<ref>Template:Cite news</ref> He moved from the US to Canada in part due to disillusionment with Ronald Reagan–era politics and disapproval of military funding of artificial intelligence.<ref name="nytimes" />
In August 2024, Hinton co-authored a letter with Yoshua Bengio, Stuart Russell, and Lawrence Lessig in support of SB 1047, a California AI safety bill that would require companies training models which cost more than US$100 million to perform risk assessments before deployment. They said the legislation was the "bare minimum for effective regulation of this technology."<ref>Template:Cite magazine</ref><ref>Template:Cite web</ref>
Personal life
Hinton's first wife, Rosalind Zalin, died of ovarian cancer in 1994; his second wife, Jacqueline "Jackie" Ford, died of pancreatic cancer in 2018.<ref name=NewYorkerProfile>Template:Cite magazine</ref><ref name=nieman-20240606>Template:Cite web</ref>
Hinton is the great-great-grandson of the mathematician and educator Mary Everest Boole and her husband, the logician George Boole.<ref>Template:Cite news</ref> George Boole's work eventually became one of the foundations of modern computer science. Another great-great-grandfather of his was the surgeon and author James Hinton,<ref>Template:Cite web</ref> who was the father of the mathematician Charles Howard Hinton.
Hinton's father was the entomologist Howard Hinton.<ref name=whoswho/><ref>Template:Cite journal</ref> His middle name comes from another relative, George Everest, the Surveyor General of India after whom the mountain is named.<ref name=nytimes>Template:Cite news</ref> He is the nephew of the economist Colin Clark,<ref name=":5">Template:Cite news</ref> and nuclear physicist Joan Hinton, one of the two female physicists at the Manhattan Project, was his first cousin once removed.<ref>Template:Cite AV media</ref>
Hinton injured his back at age 19, which makes sitting painful for him. He has dealt with depression throughout his life.<ref>Template:Cite web</ref>
References
- Artificial intelligence researchers
- British computer scientists
- British socialists
- Canadian computer scientists
- Canadian Nobel laureates
- Canadian socialists
- Nobel laureates in Physics
- Companions of the Order of Canada
- Fellows of the Association for the Advancement of Artificial Intelligence
- 2023 fellows of the Association for Computing Machinery
- Fellows of the Royal Society
- Google employees
- Living people
- Machine learning researchers
- Academic staff of the University of Toronto
- Canada Research Chairs
- 1947 births
- Carnegie Mellon University faculty
- Rumelhart Prize laureates
- Alumni of King's College, Cambridge
- Alumni of the University of Edinburgh
- Fellows of the Cognitive Science Society
- Turing Award laureates
- People from Wimbledon, London
- Foreign associates of the National Academy of Engineering
- Hinton family
- Canadian fellows of the Royal Society
- People educated at Clifton College
- Artificial intelligence industry in Canada