Background and context

Some background on me just so you have a rough idea of where this review is coming from: I did Olympiads as a kid and have been involved in fairly math-heavy fields ever since, through university at least, depending on your definition of math-heavy. I’ve also been completely self-taught when it comes to the non-elementary-school math I know. So my plan for my Math Academy subscription was mostly to brush up on things. I also only did the university-level courses.

And the usual disclaimer: your experience can be very different from mine. I’m just putting my thoughts out there because some people asked me to, plus it’s interesting to talk about learning and pedagogy. This is not a positive or negative advertisement of any sort (and should not be taken that way). Their first month is fully refundable if you decide it wasn’t worth it and don’t continue with them (for now, at least), so trying it out for a month to get first-hand experience is probably your best bet (and was one of the reasons I decided to subscribe myself). It’s $49 a month, which might be a lot for many people in certain regions, but considering you get a tutor that never tires and is available 24/7, it’s not a bad deal for others (I’m in the latter set for now).

I remember coming across Math Academy through a couple of blogs I used to read, and when I read about their approach to pedagogy, it felt pretty solid. At the time, though, I didn’t really have the bandwidth, but recently I’ve been making more time for myself and decided to try it out as one of the things long overdue on my to-do list.

That said, as mentioned earlier, this is not an advertisement, just a bunch of my observations over the last week. I may or may not update this review with more information in the future. And after you read my review, you should probably also visit their website to get a better idea of what their approach is.

The Good

  • I liked how systematic and comprehensive the course content was (you can check the syllabus breadth on the website for each course). The knowledge graph (a better dependency graph, if you will) was one of the reasons I wanted to try this in the first place. Every time you finish a module (designed to take 5-15 minutes), you potentially unlock new modules that depend on it, so the graph at least tries to indicate what you know. The dependencies listed for each module (shown either before you enter it or on the content pages themselves) also help you refresh your memory if you need to.
  • They also offer a choice of multiple modules (whose dependencies you already satisfy) to work through in parallel. The idea is that the sequential bottleneck of learning one topic at a time slows things down, and consolidation can happen for multiple separate topic “streams” simultaneously. This improves throughput and gives you a wider variety of topics to pick from (based on your preferences/whims), which is helpful if you have mental blocks or just prefer to revisit certain topics later.
  • The focus on spaced repetition is pretty good. Mainstream education rarely optimizes retrieval practice, and in my experience this really does need to be personalized. Systems like Anki are built on this idea (you rate how hard recall was, and the app adjusts the next review interval accordingly). Math Academy follows this too, which is quite good (I’ve sketched a toy version of this scheduling, together with the prerequisite-unlock logic from the first bullet, right after this list).
  • The diagnostic test is a good decision in many ways (though it has drawbacks, and what follows here is about an idealized version of a diagnostic test that I believe should be feasible to approximate to an extent). It helps check whether you’re well placed for the course and estimates your knowledge frontier to avoid topics you’re already proficient in. It still seems to give review problem sets on some foundational topics and is generally conservative in assuming what you know. Apparently, there’s also a holistic mode for some courses with a broader selection of foundational topics, but I didn’t encounter this setting for the courses I took. Another advantage is not wasting time/money on known material, which also avoids potential boredom. It also starts you near the border of your comfort zone, a good place to be when you’re learning something new.
  • The amount of practice can be good or bad, but for most people, it’s good. Practice might be boring/annoying, but it’s really the only way to build better retrieval mechanisms. You can understand a concept well, but a month later it might be gone if you didn’t practice to make it stick. You might think you can just re-derive things on the spot, but internalizing something for effective future use (making it an almost subconscious tool) is different from being able to derive it.
  • Gamification can motivate people to put in more effort. It doesn’t provide “pure” motivation, I agree, but it seems to work. It can also have long-term downsides (reward association without intrinsic motivation, etc.). They have an XP system useful for tracking progress and possibly for internal curriculum design. The league system was distracting for me (UX-wise), so I turned it off, but it might help competitive people overcome motivation issues with math.
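
To make the mechanics in the first and third bullets above concrete, here is a minimal toy sketch (my own simplification with made-up module names, not Math Academy’s actual implementation): modules unlock once their prerequisites are complete, and completed modules come back for review on an expanding, Anki-style schedule.

```python
# A toy sketch (my own simplification, not Math Academy's implementation) of two
# mechanics: modules unlock once their prerequisites are done, and finished
# modules come back for review on an expanding schedule when recall goes well.
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical prerequisite graph: module -> modules it depends on.
PREREQS = {
    "limits": set(),
    "derivatives": {"limits"},
    "chain rule": {"derivatives"},
    "integrals": {"derivatives"},
}

@dataclass
class ReviewState:
    interval_days: int = 1                          # days until the next review
    due: date = field(default_factory=date.today)

def unlocked(completed):
    """Modules you can start now: not yet done, with all prerequisites done."""
    return {m for m, deps in PREREQS.items() if m not in completed and deps <= completed}

def schedule_review(state, recalled, today):
    """Anki-style update: good recall roughly doubles the interval, a lapse resets it."""
    interval = state.interval_days * 2 if recalled else 1
    return ReviewState(interval_days=interval, due=today + timedelta(days=interval))

# Finishing "limits" unlocks "derivatives"; good reviews push the next one further out.
completed = {"limits"}
print(unlocked(completed))                          # {'derivatives'}

state = ReviewState()
for recalled in (True, True, False, True):          # two good reviews, a lapse, recovery
    state = schedule_review(state, recalled, date.today())
    print(state.interval_days, state.due)
```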

The Bad

  • The overall course format seems to lack a coherent “middle term” structure (it’s either the 10-minute lesson or the month-long course). The way lessons are broken down is also resistant to introducing the motivation behind concepts. That is, chunking of multiple concepts into themes seems minimal. This might be fine for kids without much context, but for relearners or those with more real-world context, better structure could improve motivation and understanding, as well as build better mental connectivity between concepts.

  • It is relatively easy to game the questions. Maybe it’s an issue with the format, but it’s not hard to brainlessly mix and match pieces from previous examples and overfit to those exact question types without being able to do anything else.

  • Proofs are usually there, but at least for my personal tastes, they could be improved: both more proofs and more rigorous ones (rather than resorting to arguments that feel like “tricks” without the deeper theory you encounter later in the course). I might have missed some, since the diagnostic tests always placed me at the at-least-80%-progress point in courses, even after slight sandbagging, so the foundational modules might have had more proofs.

  • It’s very grind-y; at times it feels like test prep rather than developing an understanding of things, and a book would serve that purpose much better. For example, the diagnostic test flagged hypothesis tests for me (because I sandbagged that part, wanting a refresher on it). It then gave me something like 10(?) different lessons on various combinations of how hypothesis tests are done (a separate lesson each for left-sided, right-sided, and two-sided tests, critical regions for left-sided tests, critical regions for right-sided tests, hypothesis tests for <insert distribution>, and maybe some others I’m missing), which I felt could have been compressed much more tightly and paired with a better variety of questions to stress-test my understanding. The amount of calculation also reminded me of test prep rather than joyfully learning something on my own. I admit this could be good for others, but for me and my use case, something more fine-grained could have been more beneficial. The whole process felt oriented toward simplicity/efficiency/time-to-mechanical-proficiency in some sense.

  • The quality of lessons has noticeable variance across courses/topics, but I don’t think it’s that big of an issue. Developing a curriculum is hard and needs iteration (which is also why it is a beta program for now, after all).

  • Yes, the lack of a skip button can be good for many people, and it matters more in a setting where a tutor’s time is precious or the student is not well calibrated about their own progress. But one thing that put me off was how a single mistake on the diagnostic test subjects you to a drudgery of lessons and exercises that you really want to stop, because it’s something you already know and you just had a bad day! They do have a way to retake the diagnostic test and rebuild your knowledge profile from scratch, but in my opinion the issue is deeper: in building a knowledge profile from test results, performance on a few questions is conflated with an assessment of your understanding of the topic, which is not precise enough (if usable at all) for such purposes.

  • For most people who have already been taught things in a certain way, it might be jarring/confusing to learn them in a different way (and the instructor then needs to be far more conservative in assuming what you know). I felt this on the diagnostic test when I relied on sheer problem-solving skill instead of the concepts the expected solutions used, which probably confused their system about what I know/recall (not just within their own progression of concepts, but in the broader concept space). It might just be an issue with diagnostic tests, and could potentially be fixed by asking people to write up their solutions and perhaps using an LLM to figure out which concepts they used (and sometimes asking them interactively to clarify what they understood).

    An example is how linear algebra can be taught in different orders. The problem is not disambiguating which topological order of the knowledge graph you were taught in (that doesn’t matter much, since the knowledge graph is already modelled that way), but rather that the dependency chains themselves are not the same across people (for example: matrices -> linear transforms, or linear transforms -> matrices? wedge products -> determinants? determinants -> inverse of a matrix?). One reason this is hard for math in particular is that you can often choose your starting point based on what you already know. So accommodating other learning trees seems hard if you have a single knowledge graph you assume people follow. One of the only reasonable workarounds (other than roughly figuring out which path the person actually took and having a learning plan close to it) is to be conservative about assuming what the person knows, which is still suboptimal for many reasons. Perhaps a more top-down approach gives a person more breathing space in this sense. The toy sketch below shows how the same topics under two different prerequisite graphs lead to different conclusions about what a learner is ready for.
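
    As a toy illustration of that last point (my own made-up example, not Math Academy’s model), here are the same linear algebra topics connected by two different prerequisite graphs. Both are internally consistent, but they disagree about what a learner who knows linear transforms but not (yet) matrices is ready for:

```python
# Toy illustration (not Math Academy's actual model): the same linear algebra
# topics connected by different prerequisite edges, depending on how you were taught.
TOPICS = ["vectors", "matrices", "linear transforms", "determinants", "inverse of a matrix"]

# One plausible route: matrices first, linear transforms as "what matrices do".
curriculum_a = {
    "matrices": {"vectors"},
    "linear transforms": {"matrices"},
    "determinants": {"matrices"},
    "inverse of a matrix": {"determinants"},
}

# Another plausible route: linear transforms first, matrices as their coordinate form.
curriculum_b = {
    "linear transforms": {"vectors"},
    "matrices": {"linear transforms"},
    "determinants": {"linear transforms"},
    "inverse of a matrix": {"matrices", "determinants"},
}

def ready_topics(prereqs, known):
    """Topics not yet known whose prerequisites are all already known."""
    return {t for t in TOPICS if t not in known and prereqs.get(t, set()) <= known}

# A learner taught along route B who knows linear transforms but not (yet) matrices:
known = {"vectors", "linear transforms"}

# What they look "ready" for depends entirely on which graph you assume they followed.
print(sorted(ready_topics(curriculum_a, known)))  # ['matrices']
print(sorted(ready_topics(curriculum_b, known)))  # ['determinants', 'matrices']
```

    A system that only trusts the stricter of the two graphs stays safe but forces redundant work, which is roughly the trade-off described above.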

Final thoughts

I think Math Academy serves some audiences extremely well, and others less so. It’s hard to describe precisely, but fundamentally, developing a deep-enough understanding (of at least university-level math) might be difficult using only Math Academy. You’d likely be better off using a detailed book as your main reference, with Math Academy as a supplement for practicing algorithms/procedures to solve problems and spaced repetition. In that vein, it seems great for test prep.

Again, this is my own, very personal opinion. I believe the underlying motivation behind Math Academy is good (and we really need changes like this in education systems around the world). It works well for a lot of people, as you can tell from the many positive reviews you can find on the internet. So if you like it and don’t agree with the review above, don’t stop using Math Academy.

And as always, feel free to contact me if you have any comments/questions/ideas around this topic!