The classification theorem for finite simple groups might be a good candidate, though. Its proof takes up tens of thousands of journal pages, mostly published between the 1950s and 2004. It was a huge deal when I was in college in the 1990s. The Wikipedia article is pretty good, but of course assumes you know something about group theory to begin with. And I’ll be honest: I’ve forgotten more of the definitions involved than I remember.
Simple groups are the building blocks of all finite groups, in a loose sense similar to how prime numbers are the building blocks of the integers under multiplication. The Jordan-Hölder theorem makes this precise. So by classifying the finite simple groups, we made a massive leap in understanding how all finite groups work.
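For those who want the precise statement, Jordan-Hölder says roughly the following (recalled from memory, so consult a textbook for the exact hypotheses):

```latex
% Any finite group G has a composition series
\[
  G = G_0 \triangleright G_1 \triangleright \cdots \triangleright G_n = \{1\},
  \qquad G_i / G_{i+1} \ \text{simple for every } i,
\]
% and any two such series have the same length and the same multiset of
% simple quotients up to isomorphism and reordering -- the group-theoretic
% analogue of unique prime factorization of the integers.
```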
The Feit-Thompson theorem, which was probably the first glimpse that such a thing was possible, is about 255 pages of very dense mathematics. It was important enough that it took up an entire issue of the journal in question. The theorem says that every finite group of odd order is solvable, and the proof was certainly the longest in group theory up to that point, and possibly the longest in any journal to that point. It’s not the longest any more, though. One piece of the classification (Aschbacher and Smith, on quasithin groups) ends up being about 1300 pages!
But the point here is that there was no reason to believe this was even possible, especially with such a small number of naturally occurring families: the cyclic groups of prime order, the alternating groups, the groups of Lie type, and 26 sporadic groups that don’t fit anywhere else. Pretty amazing.
As to why math problems are getting harder to solve, I’m not sure that’s true. We’ve got better tools to tackle the hard problems now, but things have gotten somewhat specialized. For a non-specialist it’s hard to even understand what some of the problems are asking about — the Hodge Conjecture, one of the Millennium Prize problems, is definitely in this category. And mathematicians want to generalize things as broadly as possible, so sometimes helpful details get lost. So the complexity may not be essential, just in the presentation and the fact that lay people, even those in other specializations of math, don’t know the jargon involved. This is, of course, a double-edged sword, since often the more general questions can actually be easier to tackle, having lost unnecessary details that get in the way.
However, many problems that were super difficult in the past fall fairly easily now. Take partial differential equations, for instance. We have tons of numerical methods that didn’t exist 100 years ago, along with the computers to run them on, so engineers and scientists don’t do as much analytic equation solving. Numerical approximations are generally sufficient unless you’re working in some unstable region of an equation or are trying to prove some qualitative thing about the system involved. And we can deal with many of those cases too.
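Just to illustrate what "numerical methods" means here, a rough sketch of an explicit finite-difference scheme for the 1D heat equation fits in a few lines (the parameter values are made up, and real solvers are much more sophisticated than this):

```python
import numpy as np

# Explicit finite-difference sketch for the 1D heat equation u_t = alpha * u_xx
# on [0, 1] with fixed endpoints. All parameter values are illustrative only.
alpha, nx, nt = 0.01, 101, 5000
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha           # stay below the stability limit dx^2 / (2*alpha)

x = np.linspace(0.0, 1.0, nx)
u = np.exp(-100 * (x - 0.5) ** 2)  # initial heat bump in the middle
u[0] = u[-1] = 0.0                 # boundary conditions

for _ in range(nt):
    # u_xx approximated by the centered second difference
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

print(u.max())  # the bump spreads out and decays, as expected for diffusion
```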
With respect to machine learning, I would recommend having a look at numpy, matplotlib, scipy and especially http://scikit-learn.org/ (a standard machine learning library for Python). I would try to implement the respective algorithms on my own in an efficient way and afterwards compare the results, or write some add-ons (like adding a new class of distributions, etc.).
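For example, a hypothetical side-by-side check of a hand-rolled least-squares fit against scikit-learn might look like this (just a sketch; the toy data and numbers are made up):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up toy data: y = 3x + 1 plus noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = 3.0 * X[:, 0] + 1.0 + rng.normal(scale=0.5, size=200)

# Hand-rolled ordinary least squares (design matrix with an intercept column)
A = np.column_stack([X[:, 0], np.ones(len(X))])
coef_own, intercept_own = np.linalg.lstsq(A, y, rcond=None)[0]

# The same fit with scikit-learn
model = LinearRegression().fit(X, y)

print(coef_own, intercept_own)            # roughly 3.0 and 1.0
print(model.coef_[0], model.intercept_)   # should agree closely with the above
```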
More generally, to get acquainted with Python, I would like to mention a different approach. When I taught Python to a motivated group of first-semester biology students, I started by asking them what they were interested in learning. From their answers I assembled a fairly complex task for beginners:
They were supposed to set up an artificial ecosystem of mice and cats (both male and female, with an age, a natural death rate, movement rules, etc.) that would end up in a Lotka-Volterra-like equilibrium. Afterwards we checked that together by fitting the Lotka-Volterra equations to the simulated data. The system and the fits were visualized with some (animated) graphs and a 2D arena showing the individuals as little moving pixels. The students liked this approach a lot, because it was a hands-on strategy involving most of the basic ideas (loops and conditions, lists, numpy arrays, functions, classes, plotting, GUIs and animation) as well as concepts related to modelling and simulation (probability theory, statistics/data analysis, differential equations, dynamical systems, and translating ideas from math to algorithms and the other way around).
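To give a flavor of the fitting step, here is a minimal sketch (not the actual course code, and with made-up initial populations and parameters) of integrating the Lotka-Volterra equations and fitting them to noisy "simulated" data with scipy:

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

# Classic Lotka-Volterra predator-prey ODEs; parameter values are illustrative.
def lotka_volterra(state, t, a, b, c, d):
    prey, pred = state
    return [a * prey - b * prey * pred,
            -c * pred + d * prey * pred]

t = np.linspace(0, 30, 300)
true_params = (1.0, 0.1, 1.5, 0.075)
data = odeint(lotka_volterra, [10.0, 5.0], t, args=true_params)
data += np.random.default_rng(1).normal(scale=0.3, size=data.shape)  # stand-in for agent-based noise

# Fit the ODE solution (prey and predator stacked into one vector) to the noisy data.
def model(t, a, b, c, d):
    return odeint(lotka_volterra, [10.0, 5.0], t, args=(a, b, c, d)).ravel()

fit_params, _ = curve_fit(model, t, data.ravel(), p0=(1.2, 0.12, 1.2, 0.06))
print(fit_params)  # should land near the "true" parameters above
```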
I will never forget the moment when the students realized that what they had just done over the last seven 90-minute sessions plus homework was nothing other than implementing the difficult math from their math course, without noticing it, just by playing around and bringing in their own ideas on how to make the system more realistic. All of that started from the concept of a simple for-loop and a list. They were extremely fascinated by watching those moving dots on their screens, appearing, mating, dying and finally, after some parameter tuning, ending up in a steady-state solution.
The range of course topics was quite big given the time, and we did not use any prepared script or lecture notes, just hands-on coding, but it worked out pretty well and the feedback was positive.
One of the six students is now doing a PhD in neuroscience, another one in evolutionary ecology.
From that experience I can highly recommend such an approach for learning programming in general, as it involves most of the important concepts and ideas in a playful, but still science-related, way. For me that would also be the project of choice for getting acquainted with the concepts of a new programming language.
Tividar, as a scientist I see gram-positive and gram-negative cocci in your illustration. Just a thought.