We make a study app. We think it's a good one. We also think a fair amount of marketing in this category is dishonest about what software can and can't do for a student, and that dishonesty hurts the students it's meant to help.
So here's a piece, written by the people who make the tool, about what the tool (and tools like it) can't do. It's a question worth asking, because the answer changes how you should use any AI study tool, including ours.
The short version: AI is good at maybe 70% of the cognitive work of being a student. The other 30% is human, embodied, social, or just plain not solvable in a chat box. Knowing the line is the difference between using AI well and over-relying on it in ways that quietly hurt your education.
What AI study tools are actually good at
Before the limits, the strengths. Otherwise the post reads as anti-tool, which isn't the point.
AI study tools, when they're well-built, do a few things genuinely well.
Pattern-matching against your weaknesses. A system that watches a hundred quizzes and tracks which topics you keep missing can give you a cleaner picture of your gaps than you'd have on your own. Memory, as Fennie calls it, isn't magic; it's just consistent attention.
Generating practice on demand. A flashcard from a note in two clicks. A quiz from a chapter in twenty seconds. The cost of producing more practice has dropped to roughly zero, which matters because the bottleneck on most studying isn't knowledge of how to study — it's the friction of producing the next practice item.
Patient explanation. A chat tutor will explain the same concept four different ways without getting bored. A human tutor charges $40 an hour to do this. The tireless quality matters more than people think.
Schedule arithmetic. "Given everything on your calendar and what you've struggled with, here's what to do today." The math is annoying for a human to do but trivial for a system.
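The arithmetic really is simple. Here's a toy sketch of the idea in Python; it is not Fennie's actual planner, and the weighting, field names, and numbers are invented for illustration. The shape of it: split today's minutes across topics, favoring whatever is weak and whatever is due soon.

```python
from dataclasses import dataclass

@dataclass
class Topic:
    name: str
    mastery: float      # 0.0 (new) to 1.0 (solid), e.g. from quiz history
    days_to_exam: int   # days until the exam that covers this topic

def daily_plan(topics: list[Topic], minutes_available: int) -> list[tuple[str, int]]:
    """Split today's study time across topics.

    Toy weighting: favor topics that are weak (1 - mastery) and whose
    exam is soon (1 / days_to_exam). Purely illustrative.
    """
    weights = {t.name: (1 - t.mastery) / max(t.days_to_exam, 1) for t in topics}
    total = sum(weights.values()) or 1.0
    return [(name, round(minutes_available * w / total)) for name, w in weights.items()]

print(daily_plan(
    [Topic("SN2 mechanisms", 0.4, 5), Topic("Acid-base", 0.8, 5), Topic("Stereochem", 0.2, 12)],
    minutes_available=90,
))
```

Ninety minutes, three topics, and the weak topic with the near exam gets the biggest slice. Annoying to recompute by hand every morning; trivial for software.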
That's the 70%. It's a real 70%. It used to take hours to do this stuff manually, and most students didn't.
Now: the 30%.
What live discussion does that AI can't
Sit in a small seminar with eight other students arguing about a difficult text. The professor pushes back on a half-formed thesis. Someone in the corner says something unexpected that reframes the whole argument. You think of a counterexample on the spot.
This is irreplaceable. Not because AI couldn't simulate a discussion — it can, and it's getting better. But because the people you're disagreeing with are real, present, and going to remember the argument tomorrow. The social stakes change what you say. The unpredictability is the point.
A first-year law student preparing for cold calls cannot replace a study group of other 1Ls who will challenge them the way the professor will. A philosophy student can read every secondary text alone, but won't develop the nimbleness to hold a position under live argumentative pressure without actually being argued with.
This is one of the strongest cases for keeping AI study tools out of certain modes. If your major is built on live argumentation — law, philosophy, debate, literature seminar — your AI use should be heavy on the encoding side and light or absent on the discussion side. The discussion has to happen with humans.
What we do at Fennie: we don't try to simulate seminar discussion. The chat will help you prepare for the seminar — outline arguments, anticipate counterpoints, sharpen your questions — but the seminar itself is a thing it stays out of.
Lab work and embodied skill
If you're studying organic chemistry, you can do a lot of the encoding in Fennie. Mechanisms, reactions, naming conventions. All of that fits a notes-and-flashcards-and-quizzes structure beautifully.
What doesn't fit: holding the actual flask. Smelling whether the reaction smells right. Recognizing when the color change happened too fast. Calibrating the technique to your specific hands. Knowing what to do when the rotovap squeals.
This is true across the lab sciences. Bio dissection. Physics lab. ChemE unit operations. CS systems work where you have to actually wrestle with a real cluster, not just write code that compiles.
The fix is not to use less software. The fix is to be honest that the tool is helping with the cognitive scaffolding of these courses but not the embodied skill. Spend lab time as lab time. Don't try to learn lab skill out of a video at home.
A medical student studying for the Step exams cannot replace third-year clerkship time with flashcards. A nursing student cannot pass the practical with Anki. A welding student cannot learn to weld from a chat tutor. Be cautious of any tool that suggests otherwise.
Office hours and the question you didn't know to ask
Office hours are underused in a specific way. Students show up with a list of questions they're confused about and try to get them answered. That's a fine use, but it's not the highest-leverage one.
The highest-leverage use is sitting with the professor or TA, working through a problem, and watching them think. The way they decide what to try first. The thing they pause on that you would have skipped. The aside they make about the actual scientific or legal context that the textbook doesn't cover.
This is a meta-skill: it's how you learn how an expert reasons. AI explanations are good, often better than your textbook, but they don't show you a person reasoning in real time. You can ask why an explanation chose a particular angle, but the answer is a post-hoc rationalization, not the actual reasoning trace.
Show up to office hours with one hard problem. Watch them work it. That's the highest-value 25 minutes you'll spend in the course.
Mental health and burnout
I want to be careful here, because this is the section where over-claiming would do real harm.
AI study tools cannot tell when you're depressed. They cannot tell when you're pushing past a healthy limit. They cannot tell when "behind" is the symptom of something deeper than poor planning.
A daily plan that arrives every morning is a useful tool when you're in baseline shape. It's a punishing tool when you're in crisis. It says do these four things, and if you can't, the gap between the plan and your actual capacity becomes its own source of shame.
If you're in that mode, please talk to a human. Counseling services on most campuses are free or low-cost. The 988 Suicide & Crisis Lifeline is available by call or text. Your friends, your RA, a professor you trust.
What we try to do at Fennie: not pile on. The morning plan defaults to short and doable. We don't send aggressive notifications. We don't gamify streaks. The system is designed to ease off, not lean in, when you're under-performing the plan. But software is software. It can't see the human inside the user.
If a study tool is making you feel worse and not better — if opening the app produces dread, if the plan keeps failing and the gap is growing — that's a signal to step back. Not a signal to use it harder.
Bad notes that no tool can rescue
A small one, but it comes up.
If your notes are bad — bullet points without context, equations without setup, terms without definitions — then anything you generate from them (flashcards, quizzes, lessons) inherits the badness.
A flashcard from a confusing note is a confusing flashcard. A quiz from incomplete material is an incomplete quiz. The system will surface these as you fail them, but the root fix is to take better notes during class or to clean them up the same evening.
This isn't really a limit of AI study tools; it's garbage in, garbage out. But it matters because students sometimes assume the tool will fix the input. It can't. It can highlight that the input is broken (repeated card failures on a topic, fuzzy quiz questions), but it can't write the original note for you.
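To make "highlight that the input is broken" concrete, here's a minimal sketch of the kind of signal a tool can compute from review history. The threshold and field names are invented; this is the shape of the idea, not Fennie's implementation.

```python
from collections import defaultdict

def flag_suspect_notes(reviews, fail_rate_threshold=0.6, min_reviews=5):
    """Flag source notes whose flashcards keep failing.

    `reviews` is an iterable of (note_id, passed) pairs. A note with
    enough reviews and a high failure rate is probably a bad note,
    not just a hard topic; the tool can point at it, but a human
    still has to rewrite it.
    """
    passes, totals = defaultdict(int), defaultdict(int)
    for note_id, passed in reviews:
        totals[note_id] += 1
        passes[note_id] += int(passed)

    return [
        note_id
        for note_id, n in totals.items()
        if n >= min_reviews and (n - passes[note_id]) / n >= fail_rate_threshold
    ]

reviews = [("ochem-ch4", False)] * 5 + [("ochem-ch4", True)] + [("bio-ch2", True)] * 6
print(flag_suspect_notes(reviews))  # ['ochem-ch4']
```

Notice what the output is: a pointer to a note, not a repaired note. That's the whole limit in one line.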
The integration model
So given all that, how should you actually use AI study tools?
The model I'd suggest: AI for the 70%, humans for the 30%, and intentional handoffs between them.
The 70% AI handles well. Daily planning. Encoding via lessons and notes. Spaced flashcard scheduling. Practice quizzes. Topic-level mastery tracking. Calendar-aware ramping for tests. Patient repeated explanations of concepts.
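One item on that list, spaced flashcard scheduling, is concrete enough to sketch. The rule below is a toy version (double the interval when you get a card right, reset when you miss); real schedulers such as SM-2 track more state per card, and this is not Fennie's exact algorithm.

```python
from datetime import date, timedelta

def next_review(interval_days: int, passed: bool) -> tuple[int, date]:
    """Toy spacing rule: double the interval on a pass, reset on a fail."""
    new_interval = interval_days * 2 if passed else 1
    return new_interval, date.today() + timedelta(days=new_interval)

interval = 1
for result in [True, True, True, False, True]:
    interval, due = next_review(interval, result)
    print(f"{'pass' if result else 'fail'} -> next review in {interval} day(s), on {due}")
```

Bookkeeping like this is exactly the kind of work software should own, because no student keeps it up by hand for long.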
The 30% humans handle better. Seminar discussion. Lab and clinical skill. Office hours reasoning observation. Study groups and live argument. Mentor conversations about career and life. Counseling and mental health. Writing critique that requires understanding your specific voice over a semester.
The handoff. The AI prepares you for the human. You use Fennie to know the material, then bring the live questions to office hours. You use Fennie to outline an essay, then take the outline to your writing group. You use Fennie to drill biochem, then go to your lab.
When the integration works, the human time is high-leverage. You're not asking the professor "can you re-explain this concept" — they can, but a chat tutor can do it for free. You're asking the professor what they think about a specific tension you noticed. You're asking the lab partner how they're handling the same hard step. You're asking the seminar group whether your interpretation of paragraph four holds up.
That's the right relationship. AI under, humans on top. Not AI replacing humans. Not humans being protected from AI by avoiding it. AI is the bench. Humans are the field.
What this means for choosing a tool
A reasonable question after reading all this: how do you tell which AI study tools are honest about their limits?
A few signals.
Tools that claim to "replace your professor" or do "all your homework" are either lying or trying to help you cheat. Either way, walk.
Tools that won't shut up — endless notifications, streaks designed to manipulate, AI characters that act like friends — are using engagement-product playbooks that are bad for studying.
Tools that are quiet, that default to short plans, that decline to give direct homework answers, that treat your time as a finite resource — these are at least trying.
We try. We won't always get it right. But the orientation matters.
Use it for the 70%. Keep humans in the loop for the 30%. Start with Fennie free — no card, no commitment.