AI Kept Forgetting My Notes. Fixing That Taught Me How It Actually Works.
By Varshith Tipirneni. Originally published on Towards AI.

The Problem

Three weeks into learning machine learning, I ran into a problem. Not with models or math, but with my notes.

I had taken the time to write things in my own words, build analogies that made sense to me, and note down questions I wanted to revisit. The problem wasn’t quality. It was structure. My notes were scattered across different apps, formats, and styles. Some were in Notion, a few in Google Docs, and others buried in random text files. Nothing was consistent.

Every time I sat down to study, I found myself spending the first twenty minutes just trying to reconstruct context. What had I already understood? Where had I left off? Which explanation had actually made sense last time? It felt like I was re-learning my own thinking before I could move forward.

At some point, I tried something that felt obvious. I opened an AI chat, pasted my notes in, and asked it to help me study. For a while, it worked better than expected. The responses were aligned with how I thought, and it felt like I finally had something that could adapt to me instead of the other way around. For the first time, studying felt continuous instead of fragmented.

That illusion didn’t last very long.

When It Started Breaking

The problems didn’t show up all at once. At first, things felt smooth enough that I didn’t question it. The AI was using my notes, explaining things in ways that made sense, and saving me time. It felt like the system was working.

Then small inconsistencies started creeping in. Occasionally it would explain something using an example I didn’t recognize. Other times it would skip over details I was sure I had written down. It wasn’t entirely wrong, just slightly off. Easy to ignore at first, but noticeable if you paid attention. I assumed it was just me. Maybe I hadn’t phrased something clearly. Maybe I had forgotten what I wrote.

Then one response made me stop. I had asked it to explain a concept based on my notes, something I had already spent time understanding. It gave a clean answer: structured, confident, and easy to follow. But halfway through, it referenced a formula and attributed it to my notes. I paused because I knew that formula wasn’t there.

I went back and checked. It wasn’t buried somewhere I had forgotten about. It simply didn’t exist in my notes.

That’s when the problem became harder to ignore. The answer wasn’t obviously wrong. In fact, it looked correct. If I hadn’t been paying attention, I probably would have accepted it without questioning it. That’s a different kind of failure. Not something you can spot immediately, but something that quietly shifts your understanding without you realizing it.

At that point, I stopped treating it as a minor issue. I wanted to understand why the shift was happening.

Fixing the Input First

Before trying to fix the AI, I had to fix my notes.

Up until that point, the problem felt external. The AI was inconsistent, so I assumed the issue was with how it was responding. But the more I looked at my setup, the more obvious it became that I wasn’t giving it something reliable to work with.

My notes had no consistent structure. Some were written as full paragraphs, others as bullet points. A few had analogies; some didn’t. Even when two notes covered similar topics, they were formatted completely differently. It made sense that I struggled to navigate them. Expecting an AI system to interpret them consistently was even more unrealistic.

I moved everything into Markdown. Not because it’s a powerful tool, but because it forces simplicity. Plain text, lightweight formatting, and just enough structure to make things predictable. Each note followed the same pattern: a concept at the top, a short explanation, my own analogy, and a section for things I didn’t fully understand yet. It wasn’t perfect, but it was consistent. And that consistency mattered more than anything else.

What surprised me was how much this changed things, even before bringing AI back into the picture. The notes became easier to scan, easier to revisit, and easier to build upon. I wasn’t spending time reinterpreting my writing anymore.

I also added a few lines at the top of each file, including basic metadata like topic and difficulty. It didn’t change how I used the notes directly, but it made them easier to organize once I started treating them as a collection rather than isolated pieces.
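Concretely, a note file ended up with roughly this shape. The YAML front matter is just one way to do the “few lines at the top,” and the example content here is illustrative rather than a fixed template; any consistent convention works:

```markdown
---
topic: gradient-descent
difficulty: beginner
---

# Gradient Descent

## Explanation
The model adjusts its parameters step by step, moving in the
direction that reduces the error the most.

## My analogy
Walking downhill in fog: you can't see the bottom, so you keep
taking small steps in the steepest downward direction.

## Still unclear
- How do you pick a good learning rate?
```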
Looking back, this was the first real shift. The system didn’t start with the AI. It started with making the input structured enough to be usable.

Moving Beyond Chat

Up to this point, I was still using the AI through a chat interface. It worked for quick interactions, but it didn’t take long for the limitations to show. Every time I wanted to ask something, I had to paste my notes again or rely on whatever context was still in the conversation. It didn’t feel like a system. It felt like starting over each time.

I wanted something that worked more consistently, where my notes were already part of the setup instead of something I had to reintroduce every session. That’s what pushed me to move beyond chat and use the API.

In simple terms, this meant writing a small script that sends my notes and questions directly to the model and receives responses back. No chat window, no manual copy-pasting, just a structured request and a structured response.

The shift itself wasn’t as complicated as it sounds, but it changed how I thought about the interaction. Instead of treating the AI like something I “talk to,” it started to feel more like a component I could build around.
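A stripped-down version of that script looks roughly like this. It’s a minimal sketch, not the exact code: it assumes the OpenAI Python SDK, a notes/ folder of Markdown files, and a placeholder model name, but the same pattern works with any provider’s API.

```python
# Minimal sketch of the "notes + question -> model" script.
# Assumptions: the OpenAI Python SDK, a notes/ folder of Markdown
# files, and the model name below. The client reads the API key
# from the OPENAI_API_KEY environment variable.
from pathlib import Path

from openai import OpenAI

client = OpenAI()


def ask(question: str, notes_dir: str = "notes") -> str:
    # Join every note into one block so the model always sees the
    # full collection, not whatever happened to be pasted last.
    notes = "\n\n---\n\n".join(
        path.read_text(encoding="utf-8")
        for path in sorted(Path(notes_dir).glob("*.md"))
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you have
        messages=[
            # Pin the notes in the system message so every question
            # starts from the same context.
            {
                "role": "system",
                "content": "Answer using only the notes below. If something "
                "is not covered in the notes, say so explicitly.\n\n" + notes,
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask("Explain gradient descent using my own analogy."))
```

Pinning the notes into every request is what makes the interaction repeatable: each question starts from the same context instead of whatever survived in a chat window, and the instruction to flag anything not in the notes pushes back against the invented-formula problem from earlier.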
A couple of practical things became obvious quickly. The API key behaves like a […]