24 HOUR RAGE HYPERFOCUS DEEP WORK SESSION

March 11, 2024
productivity

24 HOUR RAGE HYPERFOCUS DEEP WORK SESSION

Purpose

  • I am going to spend the next 24 hours on technical tasks for all of the projects I am currently working on with other people.
  • I am very interested in seeing how my productivity improves or worsens as time progresses. I am also curious how much work I can actually accomplish in one long, open-ended block of time.

Operationalizing Productivity

  • I will measure my productivity at somewhat regular intervals during this period. I will write updates in this document after completing each task, or whenever I notice a shift in mood, energy, or some other state.
  • This measure is subjective, but it will still give me a pretty good idea of how I use long stretches of time to get work done.

Logistical Details

  • I am starting the experiment at 12pm on 3/10/24 at Barnes & Noble.
  • I plan on staying at B&N until it closes at 6pm, or until I feel the need to change location for any reason.
  • After that, I plan to go to a public library or work from home.
  • I will end the experiment at 12pm on 3/11/24.
  • If I have struggled with a technical task for 30-45 minutes without making any progress, I will switch to a different task.
  • I’ll hypercaffeinate whenever I want more dopamine. I won’t restrict how much caffeine I take in.

Planned Tasks

  • Verify experimental results for machine learning classifier automation for side channel vulnerability detection. Write a test suite to confirm the results of recursive feature elimination. Ship results to Tim Lesch and iterate. Implement the Perceptron classifier by extracting model coefficients instead of feature importances. Run the analysis script on the 13k-feature dataset. Simplify the code by learning more NumPy, with ChatGPT’s help.
  • Run the machine learning pipeline to extract the most predictive trigger pair measurements for side channel leakage.
  • Learn how to use the Microsoft ONNX JavaScript (front end) framework. Understand how the ONNX format works at a high level and implement the falconai T2T summarization model. Work with Camden to implement this on his Flask PPTX summarization site.
  • Fix the broken prototype of the web scraper for griffin events. Determine the most expedient method of formatting the results. I think that Michael found a JSON file on one of the events sites, so this may be easier than expected. Work with Michael to deliver the results in whatever format is most intuitive for him to use on the front end.
  • I am also open to working on tasks other than those listed here.

Log

11:59a - I started working on my research project.

01:00p - I figured out the bug! There was a problem with how I was retrieving the indices of the weighted features.

02:07p - I implemented the bugfix. I read more NumPy documentation and was able to simplify several parts of our analysis code. I also read the source code for the RFE implementation in sklearn and found a better way of predicting on the data and transforming the dataset by eliminating features. I started an automation run that removes eight features on each iteration of the feature reduction process, which I expect to take over twelve hours based on past runs.
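
For reference, here is roughly the shape of that setup. This is a minimal sketch with placeholder data and a placeholder estimator, not our actual pipeline; the point is just that step=8 drops eight features per iteration, and get_support/transform/predict replace the manual index bookkeeping that caused my bug.

```python
# Minimal RFE sketch with placeholder data and estimator (not the real pipeline).
# step=8 eliminates eight features per iteration of the reduction process.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

X, y = make_classification(n_samples=200, n_features=100, random_state=0)

rfe = RFE(
    estimator=RandomForestClassifier(random_state=0),
    n_features_to_select=16,
    step=8,                                   # drop eight features per iteration
)
rfe.fit(X, y)

selected_idx = rfe.get_support(indices=True)  # indices of the surviving features
print("kept features:", selected_idx)

# RFE can also transform and predict directly, instead of re-slicing X by hand.
X_reduced = rfe.transform(X)
preds = rfe.predict(X)
```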

02:12p - I started working on the ONNX JS runtime project.

02:40p - Finished installing yarn, vite, onnxruntime-web. Also got my first cup of coffee.

02:55p - Going to drive home and find somewhere new to sit.

03:34p - I realized I forgot about lunch, so I ate. Back to work now.

04:00p - Progress is much slower because this toolkit is completely new to me. I don’t think I’m being inefficient; I’m just learning a lot and not implementing much yet. Nevertheless, I’m still bothered by it.

04:08p - I got their toy example working.

04:27p Finally realized why the model card didn’t have implementation details. It follows the shared conventions of T5, a family of text-to-text transformer models.

05:15p SentencePiece is Google’s implementation of byte-pair encoding plus some other improvements. It looks like it’s the main tokenizer they use for language models. I found a JavaScript wrapper for the library that was compiled to WASM.
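
For my own reference, the Python bindings look roughly like this. The model file path is a placeholder for whatever tokenizer model the T5 checkpoint ships; I haven’t wired this into anything yet.

```python
# Rough sketch of SentencePiece tokenization via the Python bindings.
# The .model path is a placeholder for the checkpoint's tokenizer file.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="spiece.model")

text = "summarize: The quick brown fox jumps over the lazy dog."
pieces = sp.encode(text, out_type=str)  # subword pieces
ids = sp.encode(text)                   # token ids

print(pieces)
print(sp.decode(ids))                   # round-trips back to text
```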

05:15p I’m definitely losing energy and focus. I got another cup of coffee. I’ll make myself push through.

07:20p I’ve learned some about ONNX, but I haven’t been able to implement T2T on the client. I’ll come back to this later.

07:21p I’m going to work on the web scrapers for griffin events.

07:48p After some reverse-engineering of network requests, I figured out how to query for massive time periods on all the W&M events sites to get all the events over that period. You don’t even need to log into TribeLink to see events, which seems like a security flaw, but hey, it’s less work for me. The next step is to write a Python script to query these APIs, then parse the JSON into a universal format that Michael can easily use.
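
A rough sketch of what that script might look like. The endpoint URL, query parameters, and field names below are placeholders rather than the real W&M/TribeLink API, but the shape is the same: query a wide date range, then normalize each event into one shared schema.

```python
# Sketch of the planned scraper step: hit an events API over a wide date range
# and normalize the JSON into one shared shape for the front end.
# URL, params, and field names are placeholders, not the real endpoints.
import requests
from datetime import date

EVENTS_ENDPOINT = "https://events.example.edu/api/events"  # placeholder

def fetch_events(start: date, end: date) -> list[dict]:
    resp = requests.get(
        EVENTS_ENDPOINT,
        params={"start": start.isoformat(), "end": end.isoformat()},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def normalize(raw: dict) -> dict:
    # Map each site's fields onto one universal schema for the front end.
    return {
        "title": raw.get("name"),
        "start": raw.get("startsOn"),
        "end": raw.get("endsOn"),
        "location": raw.get("location"),
        "source": "tribelink",
    }

events = [normalize(e) for e in fetch_events(date(2024, 1, 1), date(2024, 12, 31))]
```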

09:15p I took half an hour for dinner and to hang out with my family. Back to work.

10:38p I was able to fix our web scrapers and return all the event info in a convenient format. I submitted a PR to our repo, so hopefully I can get some feedback on my code from Aadil or Michael.

10:41p It was really good to switch back to an easier task. It’s time to torture myself again with the onnx runtime for web.

10:56p Instead of immediately jumping into implementing the model in the web runtime, I think I’ll do it locally in Python first, which will be simpler. Maybe I can experiment and learn a bit more this way.
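
Here is a minimal local sketch of what I have in mind, using the Python transformers pipeline. The model id below is my guess at the checkpoint Camden and I have been talking about, so treat it as a placeholder.

```python
# Minimal local prototype of the T2T summarization step with the Python
# `transformers` pipeline. The model id is a placeholder for the checkpoint
# we actually end up using.
from transformers import pipeline

summarizer = pipeline("summarization", model="Falconsai/text_summarization")

text = (
    "Paste a few paragraphs of slide text here. The point of this local "
    "prototype is just to confirm the model and tokenizer behave the way "
    "I expect before porting the same flow to the browser runtime."
)

result = summarizer(text, max_length=60, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```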

11:26p Michael accepted my PR so I started writing some QOL changes for him when he’s parsing the events.

11:28p I’m going back to work on implementing the T2T model locally.

11:49p Coffee

12:41a This is kind of terrible.

12:53a screw me man, transformers.js was a thing the entire time? Apparently it’s built on top of the ONNX runtime. Huh, surprising that the thing that has eluded my ability to implement was so complicated that HF built a whole layer of abstraction over it…

01:35a Implemented T2T summarization model in transformers.js. I’ll let Camden know and we can talk about the next steps for the project.

01:36a I’m 13 hours in and I basically accomplished everything I wanted to.

01:44a My machine learning automation has used 6.7 days of cpu time. Not finished yet, though.

01:48a I’m going to do some research on tools we can use for all the communication services GE (griffin events) will need.

02:00a Zapier unfortunately has everything we need. I’m not paying for that, though. I think Michael may be right that we will end up having to implement everything separately. If that’s the case, it’s probably not as bad as I’m imagining. At first, we can just focus on the basics: email, GroupMe, Discord, Slack…
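
A back-of-the-napkin sketch of what “implement everything separately” might look like: one small fan-out over per-service webhooks. All URLs and IDs are placeholders, and I’m only assuming Discord/Slack-style incoming webhooks and the GroupMe bot post endpoint here.

```python
# Sketch of a simple announcement fan-out across services. All URLs and IDs
# below are placeholders; swap in real webhooks/bot ids before using.
import requests

DISCORD_WEBHOOK = "https://discord.com/api/webhooks/<id>/<token>"  # placeholder
SLACK_WEBHOOK = "https://hooks.slack.com/services/<path>"          # placeholder
GROUPME_BOT_ID = "<bot-id>"                                        # placeholder

def announce(message: str) -> None:
    # Discord and Slack incoming webhooks take a small JSON payload.
    requests.post(DISCORD_WEBHOOK, json={"content": message}, timeout=10)
    requests.post(SLACK_WEBHOOK, json={"text": message}, timeout=10)
    # GroupMe bots post through a single shared endpoint keyed by bot_id.
    requests.post(
        "https://api.groupme.com/v3/bots/post",
        json={"bot_id": GROUPME_BOT_ID, "text": message},
        timeout=10,
    )
    # Email would go through smtplib or a transactional mail API; omitted here.

announce("GE test announcement")
```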

02:15a Diminishing returns

02:46a My machine learning automation run finished.

02:51a The results seem to converge on 96.943% accuracy for most classifiers. This seems kind of strange, considering the selected features for each classifier are different.
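
One way I could sanity-check that later: compare how much the selected feature sets actually overlap across classifiers. The index sets below are made-up placeholders; only the comparison logic matters.

```python
# Hypothetical diagnostic: Jaccard overlap between the feature sets that RFE
# kept for each classifier. `selected` maps classifier name -> kept indices
# (placeholder values here).
from itertools import combinations

selected = {
    "rf": {3, 17, 42, 256, 1024},
    "svm": {3, 17, 99, 512, 2048},
    "perceptron": {5, 17, 42, 600, 4096},
}

for a, b in combinations(selected, 2):
    inter = len(selected[a] & selected[b])
    union = len(selected[a] | selected[b])
    print(f"{a} vs {b}: Jaccard = {inter / union:.2f}")
```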

03:10a I got a bowl of cereal for breakfast

03:46a I implemented the perceptron classifier. There was a dumb bug caused by a missing underscore, but it’s solved now.
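
For reference, this is roughly what “extracting model coefficients instead of feature importances” looks like for a linear model like Perceptron, with placeholder data. Tree ensembles expose feature_importances_, linear models expose coef_, and the trailing underscore is exactly the kind of thing I dropped.

```python
# Sketch of ranking features for a linear model: use the magnitude of coef_
# (note the trailing underscore) as the importance score. Placeholder data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import Perceptron

X, y = make_classification(n_samples=200, n_features=50, random_state=0)

clf = Perceptron(random_state=0).fit(X, y)

# For a binary problem coef_ has shape (1, n_features).
importances = np.abs(clf.coef_).ravel()
top = np.argsort(importances)[::-1][:10]
print("top features by |coef|:", top)
```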

03:52a Diminishing returns

03:55a I accomplished all the major tasks I wanted to, and there’s not really anything else I want to start right now. I don’t think it would be productive to continue.

04:00a gn 🤗

Caffeine Total

  • Grande Mocha: 175mg (2p)
  • Keurig Coffee x2: 2 x 80mg (11p, 12p)
  • Diet Pepsi: 35mg (1a)
  • Keurig Coffee: 80mg (1:46a)
  • Keurig Coffee: 80mg (~3a)
  • Total: 530mg. A typical amount. I’ve done a gram in a day before, so this was actually pretty reasonable.

Reflection

  • Level of Success

    • I was able to execute and get everything done. That’s great! I only ended up using 16 of the 24 hours.
    • I think that if I had gone about it normally, these tasks would have taken me three days. The ML work and web scraping wouldn’t have been too bad, but I’m pretty happy I was able to research and implement the summarization model in transformers.js.
    • I am annoyed that I spent so much time trying to get a basic example working for the onnxruntime-web library. I guess there was nothing I could have done to avoid this besides refining my search queries, but it is nevertheless time wasted. I would have wasted that time whether or not it happened in one working session, so it was good that I got it all out of the way at once.
    • I realize that my perceived productivity is a function of a task’s difficulty and my level of experience with it. My research and web scraping went by very quickly because I have a high level of familiarity with both. The satisfaction I got from completing those tasks was also low, because I expected to finish them without much struggle.
  • Setting Specific Goals

    • At 2am I started to waste time because I had lost direction in what I was doing. I was trying to research tools we could use, but that’s an open-ended task, and I started daydreaming a little bit. I think that when you work on open-ended tasks, you need a higher level of focus, because microdistractions creep into the space between actions.

    • I know that in the past when I have written code all night, I was able to focus super well because I had a specific goal I was focused on.

    • In the future, when I work for long periods of time, I think it will be prudent to mentally center myself around a specific goal, or set of specific goals, to accomplish. When I get tired or distracted, I need to be able to keep working on the task subconsciously, almost by muscle memory.

    • Even if my consciousness isn’t as engaged, when I have a specific goal and I know what to do next, my actions will flow from mindset to execution as naturally as water through a river.

    • Rapid microiterations are my compensation for stupidity. Improve an iota in ten seconds and extrapolate years into the future by the inductive principle.

  • Conclusion

    • My level of productivity is way more deterministic than I thought. This is really good news. If I check all the boxes, I’ll get what I want.