The 2012 source code for AlexNet, the precursor to modern AI, is now on Github thanks to Google and the Computer History Museum

AI is one of the biggest and most all-consuming zeitgeists I've ever seen in technology. I can't even search the internet without being served several ads for potential AI products, including one that's still begging for permission to run on my devices. AI may be everywhere we look in 2025, but the kind of neural networks now associated with it are a bit older. Researchers were dabbling with this kind of AI as far back as the 1950s, though it wasn't until 2012 that AlexNet kicked off the current generation of machine learning: an image recognition model whose code has just been released as open source by Google and the Computer History Museum.

We've seen many different ideas of AI over the years, but generally the term refers to computers or machines with self-learning capabilities. While science-fiction writers have toyed with the concept since the 1800s, it's far from being fully realised. Today most of what we call AI refers to language models and machine learning, as opposed to unique individual thought or reasoning by a machine. Deep learning, the technique underpinning it, essentially means feeding computers large sets of data to train them on specific tasks.

The idea of deep learning also isn't new. In the 1950s, researchers like Frank Rosenblatt at Cornell had already created a simplified machine learning neural network using similar foundational ideas to what we have today. Unfortunately the technology hadn't quite caught up to the idea, and the approach was largely rejected. It wasn't until the 1980s that machine learning really resurfaced.
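Rosenblatt's core idea fits in a few lines of modern code. Below is a minimal sketch of the perceptron learning rule (our own illustration for this article, not the original 1950s implementation, which ran on custom hardware): predict with a hard threshold, and nudge the weights only when the prediction is wrong.

```python
# A minimal sketch of Rosenblatt-style perceptron learning (illustrative,
# not the historical implementation). Labels are +1 / -1.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    n = len(samples[0])
    w = [0.0] * n   # one weight per input feature
    b = 0.0         # bias term
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Hard-threshold prediction, then update only on mistakes.
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Learn a linearly separable rule (logical AND of two inputs).
data = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [-1, -1, -1, 1]
w, b = train_perceptron(data, labels)
```

The perceptron convergence theorem guarantees this loop finds a separating line whenever one exists, but single-layer networks famously cannot learn non-separable functions like XOR, one reason the approach stalled for decades.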

In 1986, Geoffrey Hinton, David Rumelhart and Ronald J. Williams published a paper on backpropagation, an algorithm that adjusts the weights of a neural network in proportion to each weight's contribution to the error (the cost). They weren't the first to raise the idea, but rather the first to popularise it. Backpropagation for machine learning had been proposed by several researchers, including Frank Rosenblatt, as early as the '60s, but couldn't really be implemented at the time. Many also describe it as a machine learning application of the chain rule, for which the earliest written attribution goes to Gottfried Wilhelm Leibniz in 1676.
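The chain-rule connection is concrete: backpropagation walks the error backwards through the network, multiplying local derivatives layer by layer. Here's a minimal sketch for a tiny 2-2-1 sigmoid network (our own illustration; the variable names, shapes, and omission of bias gradients are simplifications, not the 1986 paper's code), with a finite-difference check that the analytic gradient matches:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w1, b1, w2, b2, x):
    # Two hidden sigmoid units, one sigmoid output.
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
    out = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
    return h, out

def backprop(w1, b1, w2, b2, x, y):
    """Gradients of the squared error (out - y)^2 w.r.t. w1 and w2."""
    h, out = forward(w1, b1, w2, b2, x)
    # Chain rule, step 1: error -> output pre-activation.
    d_out = 2 * (out - y) * out * (1 - out)
    g_w2 = [d_out * h[j] for j in range(2)]
    g_w1 = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        # Step 2: continue the chain through hidden unit j.
        d_h = d_out * w2[j] * h[j] * (1 - h[j])
        for i in range(2):
            g_w1[j][i] = d_h * x[i]
    return g_w1, g_w2

# Sanity-check the chain rule against a central finite difference on w2[0].
w1, b1, w2, b2 = [[0.3, -0.2], [0.1, 0.4]], [0.0, 0.1], [0.5, -0.6], 0.2
x, y = (1.0, 0.5), 1.0
g_w1, g_w2 = backprop(w1, b1, w2, b2, x, y)

eps = 1e-6
_, out_plus = forward(w1, b1, [w2[0] + eps, w2[1]], b2, x)
_, out_minus = forward(w1, b1, [w2[0] - eps, w2[1]], b2, x)
numeric = ((out_plus - y) ** 2 - (out_minus - y) ** 2) / (2 * eps)
```

Perturbing a weight by ±1e-6 and differencing the loss reproduces the backprop gradient to several decimal places, which remains the standard sanity check for any hand-written backward pass.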

Despite promising results, the technology wasn't quite up to the speed required to make this kind of deep learning viable. To bring AI up to the level we see today required far larger datasets to train models on, and far more computational power to process them.

In 2006 professor Fei-Fei Li at Stanford University began building ImageNet. Li envisioned a database holding an image for every English noun, so she and her students began collecting and categorising photographs. They used WordNet, an established collection of words and their relationships, to label the images. The task was so huge it was eventually outsourced to freelancers, and by 2009 ImageNet was by far the largest dataset of its kind.

Around the same time, Nvidia was working on the CUDA programming system for its GPUs. This is the company that just went hard on AI at 2025's GTC, and is even using the tech to help people learn sign language. With CUDA, these powerful compute chips could be far more easily programmed to tackle workloads beyond visual graphics. That let researchers start implementing neural networks in areas like speech recognition, and actually see success.

In 2011, two of Geoffrey Hinton's students, Ilya Sutskever (who went on to co-found OpenAI) and Alex Krizhevsky, began work on what would become AlexNet. Sutskever saw the potential from their previous work and convinced his peer Krizhevsky to use his mastery of GPU performance-squeezing to train the neural network, while Hinton acted as principal investigator. Over the next year Krizhevsky trained, tweaked, and retrained the system on a single computer with two Nvidia GPUs, using his own CUDA code. In 2012 the three released a paper, which Hinton also presented at a computer vision conference in Florence.
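The scale of what Krizhevsky trained is easy to sketch with standard convolution arithmetic, out = (in − k + 2p) / s + 1. The layer parameters below follow the 2012 paper, using the commonly cited 227×227 input convention (the paper itself says 224×224, which famously doesn't quite fit the arithmetic):

```python
# Back-of-the-envelope feature-map sizes through AlexNet's conv stack.
def conv_out(size, kernel, stride=1, pad=0):
    # Standard convolution/pooling output-size formula.
    return (size - kernel + 2 * pad) // stride + 1

size = 227                                   # 227x227x3 input image
size = conv_out(size, kernel=11, stride=4)   # conv1, 96 filters  -> 55x55
size = conv_out(size, kernel=3, stride=2)    # overlapping max-pool -> 27x27
size = conv_out(size, kernel=5, pad=2)       # conv2, 256 filters -> 27x27
size = conv_out(size, kernel=3, stride=2)    # max-pool           -> 13x13
size = conv_out(size, kernel=3, pad=1)       # conv3, 384 filters -> 13x13
size = conv_out(size, kernel=3, pad=1)       # conv4, 384 filters -> 13x13
size = conv_out(size, kernel=3, pad=1)       # conv5, 256 filters -> 13x13
size = conv_out(size, kernel=3, stride=2)    # max-pool           -> 6x6
flattened = 256 * size * size                # 9216 inputs to the fc layers
```

Those 9,216 features then feed three fully connected layers, ending in the 1,000-way classifier for ImageNet's categories. The whole thing, around 60 million parameters, was what the two consumer GPUs had to grind through.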

Hinton summarised the experience to CHM as “Ilya thought we should do it, Alex made it work, and I got the Nobel Prize.”

It didn't make much noise at the time, but AlexNet completely changed the direction of modern AI. Before AlexNet, neural networks weren't commonplace in these developments. Now they're the framework for almost anything touting the name AI, from robot dogs with nervous systems to miracle-working headsets. As computers get more powerful, we're only set to see even more of it.

Given how huge AlexNet has been for AI, CHM releasing the source code is not only a wonderful nod, but also quite prudent in making sure this information is freely available to all. To ensure it was done fairly, correctly—and above all legally—CHM reached out to AlexNet's namesake, Alex Krizhevsky, who put them in touch with Hinton, who had been working with Google since it acquired his startup. Now considered one of the fathers of machine learning, Hinton was able to connect CHM to the right team at Google, which began a five-year negotiation process before release.

This may mean the code, now available to all on GitHub, is a somewhat sanitised version of AlexNet, but it's also the correct one. There are several repositories with similar or even the same name around, but they're likely to be homages or interpretations. This upload is described as the "AlexNet source code as it was in 2012," so it should serve as an interesting marker along the pathway to AI, and whatever form it learns to take in the future.




