<?xml version='1.0' encoding='UTF-8'?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" version="2.0">
  <channel>
    <title>Bear Blog Trending Posts</title>
    <link>https://bearblog.dev/discover/</link>
    <description>Trending posts on Bear Blog</description>
    <docs>http://www.rssboard.org/rss-specification</docs>
    <generator>python-feedgen</generator>
    <lastBuildDate>Sat, 02 May 2026 02:37:49 +0000</lastBuildDate>
    <item>
      <title>my favourite unconventional animal</title>
      <link>https://monocyte.blog/my-favourite-unconventional-animal/</link>
      <description>&lt;p&gt;writing this in the last days of april, i am really consistent when it comes to procrastinating on carnival posts. anyhow, for this month's &lt;a href='https://oracleofsages.bearblog.dev/grizzly-gazette-carnival-april-your-favourite-unconventional-animal/'&gt;bearblog carnival&lt;/a&gt; hosted by &lt;a href='https://oracleofsages.bearblog.dev/'&gt;sage&lt;/a&gt;, i present to you the wackiest animal of all time: the pumpkin toadlet.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://bear-images.sfo2.cdn.digitaloceanspaces.com/monocyte/brachycephalus_ephippium02.webp" alt="picture of a pumpkin toadlet on a leaf" /&gt;&lt;/p&gt;
&lt;figcaption&gt;
picture of a pumpkin toadlet on a leaf by &lt;a href="http://calphotos.berkeley.edu/cgi/img_query?query_src=photos_photographers&amp;where-photographer=Ariovaldo+Giaretta&amp;orderby=taxon"&gt; Ariovaldo Giaretta  &lt;/a&gt;/&lt;a href="https://creativecommons.org/licenses/by-sa/2.5/"&gt;CC BY-SA 2.5&lt;/a&gt;
&lt;/figcaption&gt;
&lt;h2 id=why&gt;why?&lt;/h2&gt;&lt;p&gt;i mean it doesn't look all that weird on the outside. it's just a small little frog so why is it wacky?
first of all have you seen how small it actually is? like here, look at it compared to a hand.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://bear-images.sfo2.cdn.digitaloceanspaces.com/monocyte/brachycephalus_ephippium-1.webp" alt="picture of a pumpkin toadlet compared to a hand" /&gt;&lt;/p&gt;
&lt;figcaption&gt;
picture of a pumpkin toadlet compared to a hand by &lt;a href="http://calphotos.berkeley.edu/cgi/img_query?query_src=photos_photographers&amp;where-photographer=Diogo+B.+Provete&amp;orderby=taxon"&gt;Diogo B. Provete&lt;/a&gt;/&lt;a href="https://creativecommons.org/licenses/by-sa/2.5/"&gt;CC BY-SA 2.5&lt;/a&gt;
&lt;/figcaption&gt;
&lt;p&gt;SEE HOW SMALL THAT IS???? also, you might not realise it, but being this small comes with a lot of sacrifices, mostly in relation to how underdeveloped their inner ears are.&lt;/p&gt;
&lt;h2 id=so-just-how-underdeveloped-are-their-ears&gt;so just how underdeveloped are their ears?&lt;/h2&gt;&lt;p&gt;&lt;img src="https://bear-images.sfo2.cdn.digitaloceanspaces.com/monocyte/gif-2.gif" alt="gif of a pumpkin toadlet jumping" /&gt;&lt;/p&gt;
&lt;figcaption&gt;
gif of a pumpkin toadlet jumping from Richard Essner of Southern Illinois University Edwardsville
&lt;/figcaption&gt;
&lt;p&gt;look at this gif of it jumping. it can jump alright, but the landing? uhh, not so much. its semicircular canals aren't good enough to tell it how it's moving in the air, so it just jumps and hopes for the best. which usually means jumping and flopping about. it looks really goofy.&lt;/p&gt;
&lt;h2 id=ears-are-supposed-to-hear&gt;ears are supposed to hear&lt;/h2&gt;&lt;p&gt;one critical function of the ear is the ability to hear (wow, astonishing), and with how small their ears are, this ability is also lacking, so much so that they can't hear their own species' mating calls. they just go off visual cues like vocal sac inflation, hand waving and mouth opening.&lt;/p&gt;
&lt;h2 id=did-i-mention-poison&gt;did i mention, poison?&lt;/h2&gt;&lt;p&gt;yeah, they are also highly poisonous. that's why they can get away with systems that don't fully function. when we talk about an animal, there are 3 major things it needs to do in order to survive:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;eat&lt;/li&gt;
&lt;li&gt;move around&lt;/li&gt;
&lt;li&gt;mate&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;in this animal 2 of those 3 major functions are impaired. make of that what you will.&lt;/p&gt;
</description>
      <author>hidden (monocyte)</author>
      <guid isPermaLink="false">https://monocyte.blog/my-favourite-unconventional-animal/</guid>
      <pubDate>Thu, 30 Apr 2026 15:19:20 +0000</pubDate>
    </item>
    <item>
      <title>My ideal blog</title>
      <link>https://futureperfect.bearblog.dev/my-ideal-blog/</link>
      <description>&lt;p&gt;I wish more blogs were like the old school online journals. People just talking about their day. Not too much detail, not too confident in their opinions, not trying to prove any points, etc.&lt;/p&gt;
&lt;p&gt;This certainly is not an attack on anybody. My blog isn’t like that either. For some reason it just feels more natural to write the way you typically see others write on their blogs in the modern era (at least in what I normally see on the discovery/trending pages on Bear).&lt;/p&gt;
&lt;p&gt;That old LiveJournal/Xanga/MySpace blog style of post just felt more personal, though. Less like an op-ed.&lt;/p&gt;
&lt;p&gt;Maybe I’m just stuck in a nostalgia trap right now. But I want to start posting more like that.&lt;/p&gt;
&lt;p&gt;Tomorrow is Friday. This week has been excruciatingly long. Let’s finish strong and cruise into the weekend!&lt;/p&gt;
</description>
      <author>hidden (futureperfect)</author>
      <guid isPermaLink="false">https://futureperfect.bearblog.dev/my-ideal-blog/</guid>
      <pubDate>Fri, 01 May 2026 00:09:00 +0000</pubDate>
    </item>
    <item>
      <title>Bye Spotify, hello MP3s</title>
      <link>https://minimal.bearblog.dev/bye-spotify-hello-mp3s/</link>
      <description>&lt;p&gt;&lt;img src="https://live.staticflickr.com/65535/55242445300_211e328d31_b.jpg" alt="" /&gt;
I was on and off different streaming subscriptions like Spotify and Apple Music for a long while. I would eventually cancel my subscriptions for 2 main reasons: price and the algorithm.&lt;/p&gt;
&lt;p&gt;In Germany, Spotify Premium is 12.99€ per month. That's not a huge price tag, but it's also not insignificant. I felt OK paying for it when I was regularly listening to music, but oftentimes I had periods when I would only listen for a couple of hours per month, so the expense just felt a bit unjustified.
The non-premium version of Spotify is decent for a short listening session, but the ads get really annoying after a while.&lt;/p&gt;
&lt;p&gt;The second reason is that somehow I would always end up listening to the same music. I have a small number of artists and albums that I really like, and I would put them on rotation. Sometimes I would go on the discovery playlist, but somehow I would always end up quite disappointed by the recommendations! I'm sure that's probably just me, since I've heard people really like the algorithm's recommendations.&lt;/p&gt;
&lt;p&gt;A couple of months ago I decided to do something different. I made a list of all the albums I would really like to listen to on a regular basis, and then ordered them from this used &lt;a href='https://www.medimops.de/'&gt;music/movies/books&lt;/a&gt; website.
Here is a list of what I ordered:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
  &lt;th&gt;Artist&lt;/th&gt;
  &lt;th&gt;Album&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
  &lt;td&gt;All Time Low&lt;/td&gt;
  &lt;td&gt;Nothing Personal&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
  &lt;td&gt;Blink-182&lt;/td&gt;
  &lt;td&gt;Enema of the State&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
  &lt;td&gt;Daft Punk&lt;/td&gt;
  &lt;td&gt;Random Access Memories&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
  &lt;td&gt;Eminem&lt;/td&gt;
  &lt;td&gt;The Marshall Mathers LP 2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
  &lt;td&gt;Green Day&lt;/td&gt;
  &lt;td&gt;21st Century Breakdown&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
  &lt;td&gt;Green Day&lt;/td&gt;
  &lt;td&gt;American Idiot&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
  &lt;td&gt;Imagine Dragons&lt;/td&gt;
  &lt;td&gt;Night Visions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
  &lt;td&gt;Linkin Park&lt;/td&gt;
  &lt;td&gt;Hybrid Theory&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
  &lt;td&gt;Linkin Park&lt;/td&gt;
  &lt;td&gt;Meteora&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
  &lt;td&gt;Nirvana&lt;/td&gt;
  &lt;td&gt;Best Of Nirvana&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
  &lt;td&gt;Red Hot Chili Peppers&lt;/td&gt;
  &lt;td&gt;Californication&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
  &lt;td&gt;Skrillex&lt;/td&gt;
  &lt;td&gt;Bangarang EP&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
  &lt;td&gt;Sum 41&lt;/td&gt;
  &lt;td&gt;All Killer No Filler&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
  &lt;td&gt;Sum 41&lt;/td&gt;
  &lt;td&gt;Chuck&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
  &lt;td&gt;Sum 41&lt;/td&gt;
  &lt;td&gt;Screaming Bloody Murder&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
  &lt;td&gt;The Beatles&lt;/td&gt;
  &lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;As you can see, I have a bit of a soft spot for early-2000s punk rock, with a bit of other things I listened to growing up. For the 16 CDs I ended up spending 61,92€.
Here are some pictures of them:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
  &lt;th&gt;&lt;img src="https://live.staticflickr.com/65535/55242190348_030e794f32_b.jpg" alt="" /&gt;&lt;/th&gt;
  &lt;th&gt;&lt;img src="https://live.staticflickr.com/65535/55241141017_f559d0f3d2_b.jpg" alt="" /&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;I'm planning to place another order as soon as I've made up my mind on which CDs I'd like to add.&lt;/p&gt;
&lt;p&gt;I considered using an old MP3 player I have for playing them, but I decided to just use my phone and the VLC app. The app is not the best, but it's free, open source, not filled with ads, and has at least the basics covered (albums, playlists, shuffle).
Here is how it looks:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
  &lt;th&gt;&lt;img src="https://live.staticflickr.com/65535/55242282629_f265c5d874_b.jpg" alt="" /&gt;&lt;/th&gt;
  &lt;th&gt;&lt;img src="https://live.staticflickr.com/65535/55242047546_396d2fdb7d_b.jpg" alt="" /&gt;&lt;/th&gt;
  &lt;th&gt;&lt;img src="https://live.staticflickr.com/65535/55242190323_3e9eb8d768_b.jpg" alt="" /&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;hr /&gt;
&lt;p&gt;I want to conclude with some thoughts on this process:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;I really loved the search for the albums I wanted to buy. I spent hours listening to songs on YouTube to decide which ones I liked most. It somehow feels more definitive to invest some money in a CD versus just opening a streaming app.&lt;/li&gt;
&lt;li&gt;The limited number of songs (approx. 230 songs / 14h of listening time) has not been boring. Actually, quite the opposite. Not having the freedom to just skip to something else makes me appreciate each song more and not have FOMO. I often felt like Spotify was a TikTok-like experience.&lt;/li&gt;
&lt;li&gt;Not having ads is great. Not having a recurring fee is also great. Knowing that I now own the music and don't need an internet connection to play it is just awesome.&lt;/li&gt;
&lt;li&gt;Listening to a full album is something I had missed. There is something very relaxing about going from the first to the last song of an album instead of just the greatest hits.&lt;/li&gt;
&lt;li&gt;I do still occasionally listen to the non-premium version of Spotify. I find the desktop experience better, with fewer ads.&lt;/li&gt;
&lt;li&gt;I often put on the radio when I want to listen to something completely new. There are many stations that are ad-free (for example, &lt;a href='https://www.dasding.de/index.html'&gt;DASDING&lt;/a&gt;) and add some light moderation while playing the top pop hits.&lt;/li&gt;
&lt;li&gt;Since that first order I have bought some other CDs from used stores around my city. The prices are even lower than online, usually ~1€. This makes for a fun and sporadic activity.&lt;/li&gt;
&lt;li&gt;The CDs make for a nice decorative item in our home, and a conversation starter, especially if it's an album the guest likes as well.&lt;/li&gt;
&lt;/ul&gt;
</description>
      <author>hidden (minimal)</author>
      <guid isPermaLink="false">https://minimal.bearblog.dev/bye-spotify-hello-mp3s/</guid>
      <pubDate>Fri, 01 May 2026 13:03:00 +0000</pubDate>
    </item>
    <item>
      <title>Crescent - dataframe library in C23 built using safe_c.h and cforge</title>
      <link>https://hwisnu.bearblog.dev/crescent-dataframe-library-in-c23-built-using-safe_ch-and-cforge/</link>
      <description>&lt;p&gt;&lt;em&gt;This article contains a lot of images; if your browser fails to load them, try a different browser. Firefox-based browsers seem to have issues loading the images.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Another note: this is a very long article. I thought about splitting it, but this bearblog account is my documentation medium anyway, so I apologize in advance to readers annoyed by its length.&lt;/em&gt;&lt;/p&gt;
&lt;h2 id=introduction-building-a-pandas-replacement-in-modern-c23&gt;Introduction - Building a Pandas Replacement in modern C23&lt;/h2&gt;&lt;p&gt;I've been maintaining a side project called &lt;code&gt;valueHunter&lt;/code&gt; ~ a stock-market screener used as an internal tool for office work. It loads about 970 stocks, runs four screening strategies (undervalued growth, deep value, safe tech, composite), then does about 14 DataFrame analytics operations on the result. Basically a small data pipeline.&lt;/p&gt;
&lt;p&gt;The original version was Python with Pandas. It worked fine, but for someone used to working with low-level languages, Python dataframe libraries such as Pandas and Polars left me looking for more speed. On top of that, I'd been wanting to build a finance-related project in modern C23 for a while, so one weekend I thought: "let me just build my own dataframe in C23." After all, I've been building quite a lot of my internal tools in C, Zig and Rust. Well, let's just say that weekend (around 4 months ago) turned into a rabbit hole. I ended up with a custom DataFrame library called Crescent, built with my own custom build system (cforge) and with safe_c.h as its utility header, and even though the performance result was kind of expected, it still surprised me.&lt;/p&gt;
&lt;h2 id=the-numbers-first&gt;The Numbers First&lt;/h2&gt;&lt;p&gt;Three libraries, one pipeline, four data sizes. The same code path: load CSV → 4 screening strategies → 14 DataFrame operations. All produce identical output.&lt;/p&gt;
&lt;h3 id=small-data-970-rows-430-kb&gt;Small data (970 rows, 430 KB)&lt;/h3&gt;&lt;p&gt;&lt;img src="https://i.postimg.cc/SRJfz8mg/crescent-00.png" alt="crescent-00" /&gt;&lt;/p&gt;
&lt;p&gt;That's &lt;strong&gt;~49x faster than Pandas&lt;/strong&gt; and &lt;strong&gt;~32x faster than Python/Polars&lt;/strong&gt; (Python numbers from prior measurement, marked ††).&lt;/p&gt;
&lt;p&gt;But fast at 970 rows is expected. What matters is how it scales.&lt;/p&gt;
&lt;h3 id=scaling-to-19-million-rows-866-mb&gt;Scaling to 1.9 million rows (866 MB)&lt;/h3&gt;&lt;p&gt;&lt;img src="https://i.postimg.cc/NF5krTQ7/crescent-01.png" alt="crescent-01" /&gt;&lt;/p&gt;
&lt;h3 id=resource-breakdown-by-dataset&gt;Resource breakdown by dataset&lt;/h3&gt;&lt;p&gt;&lt;img src="https://i.postimg.cc/L5n3ZfmL/crescent-02.png" alt="crescent-02" /&gt;&lt;/p&gt;
&lt;p&gt;Crescent wins at every size. On small data it dominates (10.8x faster than Rust). On large data it still leads ~ 3.3x faster at 1.9M rows, using &lt;strong&gt;505% CPU&lt;/strong&gt; (up from 107% at small sizes) and &lt;strong&gt;2.6 GB RAM&lt;/strong&gt; vs Rust's &lt;strong&gt;5.0 GB&lt;/strong&gt;. Rust burns &lt;strong&gt;300-800% CPU&lt;/strong&gt; between the smallest and largest data sizes ~ average 2-3x more cores for a slower result.&lt;/p&gt;
&lt;p&gt;At 1.9M rows, Crescent uses &lt;strong&gt;half the RAM&lt;/strong&gt; and much less CPU compared to Rust/Polars while running faster.&lt;/p&gt;
&lt;p&gt;The question everyone asks: "what about a real-time webapp?" The C binary processes the full pipeline in 12 ms. Rust takes 130 ms. Python takes 400+ ms. For a webapp endpoint called on every page load, 12 ms vs 130 ms is the difference between "instantly" and "noticeable lag."&lt;/p&gt;
&lt;h2 id=wait-isnt-c-supposed-to-be-hard&gt;Wait, Isn't C Supposed to Be Hard?&lt;/h2&gt;&lt;p&gt;Yes and no. With the tools available in modern C, it's less hard than it used to be. Here's how Crescent works.&lt;/p&gt;
&lt;p&gt;The setup has three layers:&lt;/p&gt;
&lt;h3 id=layer-1-safe_ch-the-utility-belt&gt;Layer 1: safe_c.h ~ the utility belt&lt;/h3&gt;&lt;p&gt;&lt;a href='https://hwisnu.bearblog.dev/giving-c-a-superpower-custom-header-file-safe_ch/'&gt;Read more on safe_c.h here&lt;/a&gt;
&lt;code&gt;safe_c.h&lt;/code&gt; is a single-header C library that adds things C should have had from the start:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;RAII macros&lt;/strong&gt; via &lt;code&gt;__attribute__((cleanup))&lt;/code&gt; ~ resources auto-clean when they go out of scope&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Type-safe vectors&lt;/strong&gt; via &lt;code&gt;DEFINE_VECTOR_TYPE(name, type)&lt;/code&gt; ~ basically generics for dynamic arrays&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Result/Optional types&lt;/strong&gt; ~ &lt;code&gt;Result&amp;lt;T, Error&amp;gt;&lt;/code&gt; monads without the ceremony&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;StringView, Span, smart pointers&lt;/strong&gt; ~ stuff you'd recognize from Rust or C++&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Here's what it looks like in practice:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="c1"&gt;// File auto-closes when scope exits&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;AUTO_FILE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;data.csv&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;r&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;// use f...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="c1"&gt;// fclose() called automatically&lt;/span&gt;

&lt;span class="c1"&gt;// Vector with typed API&lt;/span&gt;
&lt;span class="n"&gt;DEFINE_VECTOR_TYPE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Stock&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Stock&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;StockVector&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;vec&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;                &lt;/span&gt;&lt;span class="c1"&gt;// vec.data, vec.size, vec.capacity&lt;/span&gt;
&lt;span class="n"&gt;Stock_vector_init&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;vec&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;Stock_vector_push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;vec&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;my_stock&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;Stock_vector_free&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;vec&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="c1"&gt;// or let AUTO_TYPED_VECTOR handle it&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The &lt;code&gt;AUTO_DataFrame&lt;/code&gt;, &lt;code&gt;AUTO_Dsl&lt;/code&gt;, &lt;code&gt;AUTO_DFQ_SCOPE&lt;/code&gt; macros you'll see later are all built on safe_c.h's &lt;code&gt;CLEANUP&lt;/code&gt; mechanism. Without it, every single error path would need manual &lt;code&gt;goto cleanup&lt;/code&gt; boilerplate. With it, the code looks almost as clean as Python.&lt;/p&gt;
&lt;h3 id=layer-2-crescent-the-dataframe-library&gt;Layer 2: Crescent ~ the DataFrame library&lt;/h3&gt;&lt;p&gt;&lt;code&gt;Crescent&lt;/code&gt; or &lt;code&gt;libvh_df&lt;/code&gt; is the core of this project. It's a pure-C DataFrame library (~10,000 lines of code) that implements:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Columnar storage (f32, f64, i32, i64, str, dictionary-encoded str)&lt;/li&gt;
&lt;li&gt;CSV reader (multi-threaded, RFC-4180 compliant, bump-arena allocation ~ zero heap per row)&lt;/li&gt;
&lt;li&gt;SIMD aggregations (sum, min, max with AVX2)&lt;/li&gt;
&lt;li&gt;GroupBy, aggregate, window rank, rolling windows&lt;/li&gt;
&lt;li&gt;Pivot tables, merge/join, melt&lt;/li&gt;
&lt;li&gt;Correlation matrix, z-score, binning (cut), describe, value counts&lt;/li&gt;
&lt;li&gt;String operations (contains, startswith, endswith, lower, upper)&lt;/li&gt;
&lt;li&gt;A chainable DSL for building pipelines&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The DSL is the part that makes it usable. Here's a value-quality screen:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="n"&gt;AUTO_DFQ_SCOPE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dfs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;out&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;dfq_from_frame&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;base&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;per &amp;gt; 0 AND roe &amp;gt; 0.08 AND pbv &amp;gt; 0 AND der &amp;lt; 1.5 AND month &amp;gt; -0.10&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_rank&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;sector&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;per&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;sector_per_rank&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_sort_asc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;per&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_head&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_assign_mul_scalar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;ROE%&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;roe&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="mf"&gt;100.0f&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_assign_mul_scalar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;Month%&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;month&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;100.0f&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;DFQ_SELECT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;code&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;stock&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;sector&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;per&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;ROE%&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;der&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;Month%&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;sector_per_rank&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;DFQ_END&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;out&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;And the equivalent in Pandas:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;PER &amp;gt; 0 and `ROE %` &amp;gt; 0.08 and PBV &amp;gt; 0 and DER &amp;lt; 1.5 and Month &amp;gt; -0.10&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;assign&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;sector_per_rank&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;lambda&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;groupby&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;Sector&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;PER&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rank&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;astype&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="s1"&gt;&amp;#39;ROE%&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;lambda&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;ROE %&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="s1"&gt;&amp;#39;Month%&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;lambda&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;Month&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sort_values&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;PER&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;head&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;)[[&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;Code&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;&amp;#39;Stock&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;&amp;#39;Sector&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;&amp;#39;PER&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;&amp;#39;ROE%&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;&amp;#39;DER&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;&amp;#39;Month%&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;&amp;#39;sector_per_rank&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;You still need &lt;code&gt;AUTO_DFQ_SCOPE&lt;/code&gt;, &lt;code&gt;dfq_from_frame&lt;/code&gt;, and &lt;code&gt;DFQ_END&lt;/code&gt; around the edges. But the middle reads like Pandas. &lt;code&gt;dfq_assign_mul_scalar&lt;/code&gt; replaces the old &lt;code&gt;dfq_assign_scalar(q, &amp;quot;ROE%&amp;quot;, &amp;quot;roe&amp;quot;, VHDF_BINOP_MUL, 100.0f)&lt;/code&gt; ~ same work, none of the enum noise. Top to bottom, it flows like a method chain.&lt;/p&gt;
&lt;p&gt;The biggest friction point was arithmetic expressions. Before the current version, computing a momentum score looked like:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="n"&gt;dfq_assign_scalar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;_w_week&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;week&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;VHDF_BINOP_MUL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.10f&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_assign_scalar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;_w_month&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;month&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="n"&gt;VHDF_BINOP_MUL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.40f&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_assign_scalar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;_w_3mo&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;f_3_0mo&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;VHDF_BINOP_MUL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.30f&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_assign_scalar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;_w_ytd&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;ytd&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="n"&gt;VHDF_BINOP_MUL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.20f&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_assign&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;_t1&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;_w_week&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="n"&gt;VHDF_BINOP_ADD&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;_w_month&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_assign&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;_t2&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;_w_3mo&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="n"&gt;VHDF_BINOP_ADD&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;_w_ytd&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_assign&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;momentum_score&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;_t1&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="n"&gt;VHDF_BINOP_ADD&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;_t2&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Seven calls for one formula. That's where I added &lt;code&gt;dfq_assign_expr&lt;/code&gt;, a tiny recursive-descent expression parser built into the DSL:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="n"&gt;dfq_assign_expr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;momentum_score&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;week*0.10 + month*0.40 + f_3_0mo*0.30 + ytd*0.20&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;One call. The parser handles &lt;code&gt;+&lt;/code&gt;, &lt;code&gt;-&lt;/code&gt;, &lt;code&gt;*&lt;/code&gt;, &lt;code&gt;/&lt;/code&gt;, parentheses, and proper precedence. Under the hood it builds the same temporary columns as the manual version, then tears them down automatically. You never see the scaffolding.&lt;/p&gt;
&lt;p&gt;Beyond &lt;code&gt;dfq_assign_expr&lt;/code&gt;, the DSL now includes purpose-built shortcuts, so a typical screener function is six readable lines:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="n"&gt;DataFrame&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nf"&gt;screen_safe_tech&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;const&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;DataFrame&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;base&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;AUTO_Dsl&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;dfq_from_frame&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;base&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;dfq_and_where_str_eq&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;sector&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;Technology&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;dfq_and_where_gt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;roe&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.15f&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;dfq_and_where_gt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;npm&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.10f&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;dfq_and_where_lt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;der&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.80f&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;vhdf_collect_or_null&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Line by line, that reads like a Pandas boolean mask.&lt;/p&gt;
&lt;p&gt;The Pandas equivalent is:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;Sector&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s1"&gt;&amp;#39;Technology&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt; 
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;ROE %&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mf"&gt;0.15&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt; 
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;NPM&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mf"&gt;0.10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt; 
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;DER&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mf"&gt;0.80&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h3 id=layer-3-cforge-the-build-system&gt;Layer 3: cforge ~ the build system&lt;/h3&gt;&lt;p&gt;&lt;a href='https://hwisnu.bearblog.dev/cforge-c-build-system-tooling-safe_ch/'&gt;Read more on cforge here&lt;/a&gt;.
&lt;code&gt;cforge&lt;/code&gt; is part build tool, part code generator, and part linter. The important part for Crescent is that it removes the repetitive C glue around schemas, dataframe adapters, and reflection.&lt;/p&gt;
&lt;p&gt;Three commands handle everything:&lt;/p&gt;
&lt;h4 id=codecforge-gen-struct-ltcsvgt-ltnamegtcode&gt;&lt;code&gt;cforge gen-struct &amp;lt;csv&amp;gt; &amp;lt;Name&amp;gt;&lt;/code&gt;&lt;/h4&gt;&lt;p&gt;Reads your CSV file, samples ~100 rows, and infers C types for each column by examining actual values:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Integer within &lt;code&gt;i32&lt;/code&gt; range (~±2 billion) → &lt;code&gt;i32&lt;/code&gt;; larger → &lt;code&gt;i64&lt;/code&gt;; overflow → &lt;code&gt;char*&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Float (contains &lt;code&gt;.&lt;/code&gt;) → &lt;code&gt;f32&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;&amp;quot;true&amp;quot;&lt;/code&gt;/&lt;code&gt;&amp;quot;false&amp;quot;&lt;/code&gt; → &lt;code&gt;bool&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Everything else → &lt;code&gt;char*&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Type conflicts resolve by widening ~ int → float → string. Column names are sanitized: lowercase, non-alphanumeric characters replaced with &lt;code&gt;_&lt;/code&gt;, and C keyword collisions get an &lt;code&gt;f_&lt;/code&gt; prefix.&lt;/p&gt;
&lt;p&gt;Output: &lt;code&gt;include/&amp;lt;name&amp;gt;_auto.h&lt;/code&gt;, containing a typed C struct that matches the CSV plus the small helper macros and vector boilerplate needed to use it.&lt;/p&gt;
&lt;p&gt;The implementation template lives in &lt;code&gt;include/csv_to_struct_py.h&lt;/code&gt;, but the relevant part for day-to-day work is simple: point it at a CSV and you get a usable C struct instead of writing one by hand.&lt;/p&gt;
&lt;p&gt;After generation, &lt;code&gt;gen-struct&lt;/code&gt; automatically chains into &lt;code&gt;gen-df-col&lt;/code&gt; ~ you don't need to run both commands separately.&lt;/p&gt;
&lt;h4 id=codecforge-gen-df-col-ltnamegtcode&gt;&lt;code&gt;cforge gen-df-col &amp;lt;Name&amp;gt;&lt;/code&gt;&lt;/h4&gt;&lt;p&gt;Reads &lt;code&gt;include/&amp;lt;name&amp;gt;_auto.h&lt;/code&gt; and emits the dataframe adapter layer:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;include/dataframe_col.h&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;src/dataframe_col.c&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;include/core.h&lt;/code&gt; + &lt;code&gt;src/core.c&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This is the code that wires a typed row schema into Crescent's columnar frame representation and CSV ingest path. In practice it means I do not hand-write 50+ column append calls or schema setup code. The implementation template lives in &lt;code&gt;include/gen_df_col_py.h&lt;/code&gt;.&lt;/p&gt;
&lt;h4 id=codecforge-reflectcode&gt;&lt;code&gt;cforge reflect&lt;/code&gt;&lt;/h4&gt;&lt;p&gt;Generates the reflection and serialization glue into &lt;code&gt;generated_reflection.c&lt;/code&gt; and &lt;code&gt;generated_macros.h&lt;/code&gt;. That covers the boring support code around things like cloning, printing, CSV/JSON conversion, field metadata, and typed access helpers. The implementation template lives in &lt;code&gt;include/reflection_generator_py.h&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Auto-generation does the heavy lifting for C. The schema glue, reflection helpers, and CSV integration are generated; the human writes the actual pipeline and application logic. That is a large part of why Crescent is practical to build in C at all.&lt;/p&gt;
&lt;h4 id=the-three-stage-build&gt;The Three-Stage Build&lt;/h4&gt;&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;cforge&lt;span class="w"&gt; &lt;/span&gt;build&lt;span class="w"&gt; &lt;/span&gt;main
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;This generates a &lt;code&gt;build/dynamic_dev.mk&lt;/code&gt; Makefile and runs three stages:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Stage 1 ~ Static Analysis:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;gcc -std=c23 -fanalyzer -Wall -Wextra -Wpedantic -Wconversion -Wshadow \
    -Wformat=2 -Wimplicit-fallthrough -D_POSIX_C_SOURCE=202308L
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The &lt;code&gt;-fanalyzer&lt;/code&gt; flag runs GCC's static analyzer on every compilation unit. Catches null dereferences, use-after-free, buffer overflows, and double-free at compile time. Objects go to &lt;code&gt;build/objs_ana/&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Stage 2 ~ Sanitizer:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;gcc -ggdb -fno-omit-frame-pointer -fsanitize=address,undefined
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Builds with AddressSanitizer (ASAN) and UndefinedBehaviorSanitizer (UBSAN). After linking, cforge &lt;strong&gt;executes the binary&lt;/strong&gt; with a 5-second timeout. If the sanitizer catches a heap buffer overflow, use-after-free, or integer overflow, the binary crashes with a stack trace pointing to the exact file and line. The release build does not proceed until the sanitizer passes. Objects go to &lt;code&gt;build/objs_san/&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Stage 3 ~ Release:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;gcc -O2 -march=native -D_FORTIFY_SOURCE=3 -fstack-protector-strong \
    -fstack-clash-protection -fcf-protection=full \
    -fstrict-flex-arrays=3 -fno-plt -fno-math-errno -fno-trapping-math \
    -fPIE -pie -Wl,-z,relro,-z,now
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Full hardened release: buffer overflow detection (&lt;code&gt;FORTIFY_SOURCE=3&lt;/code&gt;), stack canaries, stack-clash protection, control-flow integrity (CET shadow stack), hardened linker relocations (&lt;code&gt;-z relro,now&lt;/code&gt;), PIE for ASLR. Output: &lt;code&gt;build/main&lt;/code&gt;. Objects go to &lt;code&gt;build/objs_rel/&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;mark&gt;Note: the Release stage uses only -O2, not the more aggressive -O3.&lt;/mark&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Incremental compilation&lt;/strong&gt; is handled by SHA-256 hashing: each &lt;code&gt;.c&lt;/code&gt; file gets its hash stored in &lt;code&gt;build/.hash_&amp;lt;path&amp;gt;&lt;/code&gt;. On the next run, if a source file's hash hasn't changed, its &lt;code&gt;.o&lt;/code&gt; is reused across all three stages. Changing a single file triggers recompilation of only that file ~ typically under 1 second. &lt;code&gt;-MMD -MP&lt;/code&gt; generates Make dependency files so header changes propagate correctly.&lt;/p&gt;
&lt;p&gt;Why all this hassle, you might ask? Correctness and safety are critical in finance (my main domain), so I'm applying what has become standard practice in libc++ hardening, and then some. You can &lt;a href='https://queue.acm.org/detail.cfm?id=3773097'&gt;read about libc++ hardening here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The results reported there speak for themselves: thousands of bugs squashed ~ out-of-bounds accesses, UB-triggering precondition violations ~ and roughly a 30% drop in the baseline segfault rate, all at a performance cost of at most 0.3%. At that price, hardening is a no-brainer.&lt;/p&gt;
&lt;h2 id=syntax-comparison-same-logic-three-languages&gt;Syntax Comparison: Same Logic, Three Languages&lt;/h2&gt;&lt;p&gt;Here's how the same operation looks in each:&lt;/p&gt;
&lt;h3 id=filter-sort-head&gt;Filter + Sort + Head&lt;/h3&gt;&lt;p&gt;&lt;strong&gt;Pandas:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;df&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;&amp;quot;PER &amp;gt; 0 and `ROE %` &amp;gt; 0.08 and PBV &amp;gt; 0 and DER &amp;lt; 1.5 and Month &amp;gt; -0.10&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;assign&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sector_per_rank&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;lambda&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;groupby&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;Sector&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;PER&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
         &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rank&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
         &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;astype&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sort_values&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;PER&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;head&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Crescent (C DSL):&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="n"&gt;AUTO_DFQ_SCOPE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dfs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;out&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;dfq_from_frame&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;base&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;per &amp;gt; 0 AND roe &amp;gt; 0.08 AND pbv &amp;gt; 0 AND der &amp;lt; 1.5 AND month &amp;gt; -0.10&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_rank&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;sector&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;per&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;sector_per_rank&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_sort_asc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;per&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_head&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_assign_mul_scalar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;ROE%&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;roe&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="mf"&gt;100.0f&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_assign_mul_scalar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;Month%&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;month&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;100.0f&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;DFQ_SELECT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;code&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;stock&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;sector&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;per&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;ROE%&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;der&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;Month%&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;sector_per_rank&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;DFQ_END&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;out&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;If you want the performance-first form, Crescent also offers typed helpers that skip the string parser (&lt;code&gt;dfq_query&lt;/code&gt;) entirely:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="n"&gt;dfq_and_where_gt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;per&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.0f&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_and_where_gt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;roe&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.08f&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_and_where_gt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;pbv&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.0f&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_and_where_lt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;der&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;1.5f&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_and_where_gt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;month&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;-0.10f&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;For pure numeric &lt;code&gt;AND&lt;/code&gt; filters both paths land on the same fused predicate-execution engine. The typed version avoids parsing, trimming, and token conversion, but on large frames the scan dominates and the gap is small. Use &lt;code&gt;dfq_query(...)&lt;/code&gt; when mirroring Pandas &lt;code&gt;.query(...)&lt;/code&gt; examples, and &lt;code&gt;dfq_and_where_*&lt;/code&gt; / &lt;code&gt;dfq_or_where_*&lt;/code&gt; when you want the typed C-native surface.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Polars via Rust:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;base&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;clone&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;lazy&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="n"&gt;col&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;PER&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;gt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="w"&gt;              &lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;and&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;col&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;ROE %&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;gt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.08&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;
&lt;span class="w"&gt;              &lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;and&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;col&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;PBV&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;gt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;
&lt;span class="w"&gt;              &lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;and&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;col&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;DER&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;lt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;1.5&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;
&lt;span class="w"&gt;              &lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;and&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;col&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;Month&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;gt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;0.10&lt;/span&gt;&lt;span class="p"&gt;))),&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;with_columns&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="n"&gt;col&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;PER&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;              &lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rank&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
&lt;span class="w"&gt;                  &lt;/span&gt;&lt;span class="n"&gt;RankOptions&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;                      &lt;/span&gt;&lt;span class="n"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;RankMethod&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;Ordinal&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;                      &lt;/span&gt;&lt;span class="n"&gt;descending&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;                  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="w"&gt;                  &lt;/span&gt;&lt;span class="nb"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;              &lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;              &lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;over&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;col&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;Sector&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)])&lt;/span&gt;
&lt;span class="w"&gt;              &lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;alias&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;sector_per_rank&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sort&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;PER&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;Default&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;default&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;collect&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;h3 id=filter-top-k-csv-export&gt;Filter + Top-K + CSV export&lt;/h3&gt;&lt;p&gt;This is the same workflow as the "export top value picks" block in &lt;code&gt;valueHunter&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Pandas:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="n"&gt;value_picks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;PER&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;between&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.01&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;15.0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 
        &lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;ROE %&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mf"&gt;0.08&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; 
        &lt;span class="o"&gt;&amp;amp;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;DER&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mf"&gt;1.5&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
    &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;nsmallest&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;&amp;#39;PER&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;value_picks&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;to_csv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;value_picks_pandas.csv&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="kc"&gt;False&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Crescent (C DSL):&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="n"&gt;AUTO_DFQ_SCOPE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dfs14&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;q14&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;export14&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;q14&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;dfq_from_frame&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;base&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_and_where_between&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q14&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;per&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.01f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;15.0f&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_and_where_gt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q14&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;roe&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.08f&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_and_where_lt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q14&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;der&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;1.5f&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_nsmallest&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q14&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;per&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;DFQ_SELECT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q14&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;code&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;stock&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;sector&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;per&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;per_rank&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;Momentum_Score&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;roe&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;der&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="cm"&gt;/* Materialise once; use the frame for both CSV export and display. */&lt;/span&gt;
&lt;span class="n"&gt;DFQ_END&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q14&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;export14&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;export14&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;vhdf_frame_to_csv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;export14&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;value_picks.csv&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="n"&gt;printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;[Export] Wrote %zu value picks to value_picks.csv&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;               &lt;/span&gt;&lt;span class="n"&gt;export14&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;num_rows&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Polars via Rust:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;value_picks&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;base&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;clone&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;lazy&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="n"&gt;col&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;PER&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;gt_eq&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.01&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;and&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;col&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;PER&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;lt_eq&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;15.0&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;and&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;col&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;ROE %&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;gt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.08&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;and&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;col&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;DER&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;lt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;1.5&lt;/span&gt;&lt;span class="p"&gt;))),&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sort&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;PER&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;Default&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;default&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;collect&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;let&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;mut&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;file&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;File&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;value_picks_polars.csv&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="n"&gt;CsvWriter&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;mut&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;file&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;finish&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="k"&gt;mut&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;value_picks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;clone&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The performance story here is the same as the previous example: the typed &lt;code&gt;dfq_and_where_*&lt;/code&gt; filters are the faster C-side surface, because they go straight into the deferred/fused numeric predicate path. A &lt;code&gt;dfq_query(...)&lt;/code&gt; version would still be valid, but it would pay a small extra parsing cost up front for no gain in the actual filter execution.&lt;/p&gt;
&lt;p&gt;Pandas is the shortest. Crescent is explicit but still compact. Rust/Polars is serviceable, but once CSV export enters the picture the chain expands into extra builder and I/O ceremony.&lt;/p&gt;
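To make the "deferred/fused numeric predicate path" concrete, here is a sketch of the shape it implies (my assumption about the idea, not Crescent's actual internals): the three typed conditions evaluated together in one pass over contiguous columns, emitting matching row indices with no intermediate boolean masks.

```c
#include <stddef.h>

/* Sketch (assumed shape, not Crescent's source): all three typed
 * filter conditions checked per row in a single loop, writing out
 * the indices of matching rows. */
size_t fused_value_filter(const float *per, const float *roe,
                          const float *der, size_t n, size_t *out)
{
    size_t k = 0;
    for (size_t i = 0; i < n; i++) {
        if (per[i] >= 0.01f && per[i] <= 15.0f &&   /* PER between 0.01 and 15 */
            roe[i] >  0.08f &&                      /* ROE above 8% */
            der[i] <  1.5f)                         /* DER below 1.5 */
            out[k++] = i;
    }
    return k;
}
```

One pass, one output index array ~ and since a string query would have to lower to the same loop anyway, the up-front parsing cost buys nothing at execution time.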
&lt;h3 id=computed-column&gt;Computed Column&lt;/h3&gt;&lt;p&gt;&lt;strong&gt;Pandas:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;Momentum_Score&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;Week&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="mf"&gt;0.10&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; 
    &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;Month&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="mf"&gt;0.40&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; 
    &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;3.0Mo&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="mf"&gt;0.30&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; 
    &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;YTD&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="mf"&gt;0.20&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Crescent (C DSL):&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="n"&gt;dfq_assign_expr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;momentum_score&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;week*0.10 + month*0.40 + f_3_0mo*0.30 + ytd*0.20&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;One call. The parser handles &lt;code&gt;+&lt;/code&gt;, &lt;code&gt;-&lt;/code&gt;, &lt;code&gt;*&lt;/code&gt;, &lt;code&gt;/&lt;/code&gt;, parentheses, and proper precedence. Under the hood it builds the same temporary columns as the manual version, then tears them down automatically. You never see the scaffolding.&lt;/p&gt;
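Whatever temporaries the parser builds internally, the end result is equivalent to this hand-written loop over the four contiguous float columns (a sketch of what the expression computes, not of the parser itself):

```c
#include <stddef.h>

/* Sketch of what the expression
 *   "week*0.10 + month*0.40 + f_3_0mo*0.30 + ytd*0.20"
 * ultimately computes: a weighted sum across four contiguous
 * float columns, written into a new result column. */
void momentum_score(const float *week, const float *month,
                    const float *mo3, const float *ytd,
                    float *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = week[i] * 0.10f + month[i] * 0.40f
               + mo3[i]  * 0.30f + ytd[i]  * 0.20f;
}
```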
&lt;p&gt;&lt;strong&gt;Polars via Rust:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;with_columns&lt;/span&gt;&lt;span class="p"&gt;([(&lt;/span&gt;
&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="n"&gt;col&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;Week&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;lit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;col&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;Month&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;lit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.40&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;col&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;3.0Mo&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;lit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.30&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;col&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;YTD&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;lit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.20&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;alias&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;Momentum_Score&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)])&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Pandas and C are one-liners. Rust requires &lt;code&gt;lit()&lt;/code&gt; wrapping every scalar and explicit &lt;code&gt;.alias()&lt;/code&gt;.&lt;/p&gt;
&lt;h3 id=groupby-aggregate&gt;GroupBy + Aggregate&lt;/h3&gt;&lt;p&gt;&lt;strong&gt;Pandas:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;groupby&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;Sector&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;PER&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;mean&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;rename&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;avg_per&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;reset_index&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Crescent (C DSL):&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="n"&gt;dfq_groupby&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;sector&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_groupby_mean&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;per&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;avg_per&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Polars via Rust:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;group_by&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="fm"&gt;vec!&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;col&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;Sector&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)]).&lt;/span&gt;&lt;span class="n"&gt;agg&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;col&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;PER&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;mean&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;alias&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;avg_per&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)])&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
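For intuition, a single-key groupby-mean is cheap to sketch in plain C (a toy version with a linear key scan ~ fine for a handful of sectors, and assumed rather than taken from Crescent's source): one pass accumulating sum and count per distinct key.

```c
#include <string.h>
#include <stddef.h>

#define MAX_GROUPS 64

/* Toy groupby-mean: one pass over parallel columns (string keys,
 * float values), accumulating sum and count per distinct key.
 * out_keys/out_means must hold MAX_GROUPS entries; returns the
 * number of groups found. */
size_t group_mean(const char **keys, const float *vals, size_t n,
                  const char **out_keys, float *out_means)
{
    float  sums[MAX_GROUPS]   = {0};
    size_t counts[MAX_GROUPS] = {0};
    size_t ngroups = 0;

    for (size_t i = 0; i < n; i++) {
        size_t g = 0;
        while (g < ngroups && strcmp(out_keys[g], keys[i]) != 0)
            g++;
        if (g == MAX_GROUPS)
            continue;                    /* group table full: drop row */
        if (g == ngroups)
            out_keys[ngroups++] = keys[i];
        sums[g]   += vals[i];
        counts[g] += 1;
    }
    for (size_t g = 0; g < ngroups; g++)
        out_means[g] = sums[g] / (float)counts[g];
    return ngroups;
}
```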
&lt;h3 id=isin-filter&gt;ISIN filter&lt;/h3&gt;&lt;p&gt;&lt;strong&gt;Pandas:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Sector&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;isin&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="s1"&gt;&amp;#39;Finance&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;&amp;#39;Technology&amp;#39;&lt;/span&gt;&lt;span class="p"&gt;])]&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Crescent (C DSL):&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="n"&gt;DFQ_ISIN&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;sector&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;Finance&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;Technology&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Polars via Rust:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;col&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;Sector&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;eq&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;Finance&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;)).&lt;/span&gt;&lt;span class="n"&gt;or&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;col&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;Sector&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;eq&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;lit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;&amp;quot;Technology&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;))))&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;The C DSL has no native &lt;code&gt;is_in&lt;/code&gt; ~ &lt;code&gt;DFQ_ISIN&lt;/code&gt; is just a variadic macro. Rust/Polars makes you chain &lt;code&gt;.eq().or().eq()&lt;/code&gt; by hand for a simple membership check.&lt;/p&gt;
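A variadic membership macro of that flavour is easy to sketch in portable C (hypothetical names, not Crescent's): the macro appends a NULL sentinel so the varargs helper knows where the candidate list ends, which is presumably how a DFQ_ISIN-style call can take any number of values.

```c
#include <stdarg.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical helper: membership test over a NULL-terminated
 * list of candidate strings. */
bool str_is_in(const char *needle, ...)
{
    va_list ap;
    bool found = false;
    va_start(ap, needle);
    for (const char *s; (s = va_arg(ap, const char *)) != NULL; ) {
        if (strcmp(needle, s) == 0) {
            found = true;
            break;
        }
    }
    va_end(ap);
    return found;
}

/* The macro appends the sentinel, so call sites can pass any
 * number of candidate values. */
#define STR_IS_IN(needle, ...) str_is_in((needle), __VA_ARGS__, (const char *)NULL)
```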
&lt;h3 id=the-verdict-on-syntax&gt;The Verdict on Syntax&lt;/h3&gt;&lt;p&gt;Ranking each on compactness and clarity:
&lt;img src="https://i.postimg.cc/cHDcHTGw/crescent-03.png" alt="crescent-03" /&gt;&lt;/p&gt;
&lt;p&gt;Pandas is still the clearest and most compact. Fifteen years of refinement shows. But the C DSL has closed a lot of that gap.&lt;/p&gt;
&lt;p&gt;Crescent (C23) is now genuinely approachable. &lt;code&gt;dfq_and_where_gt(q, &amp;quot;roe&amp;quot;, 0.15f)&lt;/code&gt; reads like a Pandas boolean mask; &lt;code&gt;dfq_or_where_gt(...)&lt;/code&gt; gives the matching OR form when you need it. &lt;code&gt;dfq_groupby_mean(q, &amp;quot;per&amp;quot;, &amp;quot;avg_per&amp;quot;)&lt;/code&gt; self-documents the operation ~ no enum values to memorize. &lt;code&gt;DFQ_SORT(q, VHDF_DESC(&amp;quot;sector_score&amp;quot;), VHDF_DESC(&amp;quot;value_score&amp;quot;))&lt;/code&gt; reads as clearly as &lt;code&gt;.sort_values(by=['sector_score','value_score'], ascending=False)&lt;/code&gt;. The &lt;code&gt;VHDF_BINOP_*&lt;/code&gt; enums that used to litter every arithmetic call are now hidden behind purpose-built shortcuts.&lt;/p&gt;
&lt;p&gt;The remaining gap is the scaffolding: &lt;code&gt;AUTO_DFQ_SCOPE&lt;/code&gt;, &lt;code&gt;dfq_from_frame&lt;/code&gt;, &lt;code&gt;DFQ_END&lt;/code&gt; add ~3 lines per pipeline. And the mutable builder means you can't chain operations like Pandas ~ each is a separate statement. But once you know the RAII scope pattern (about 5 minutes of learning), the DSL reads top-to-bottom like a method chain.&lt;/p&gt;
&lt;p&gt;Rust/Polars is the most verbose. Every operation requires ceremony: &lt;code&gt;lit()&lt;/code&gt; wrapping every scalar, &lt;code&gt;RankOptions { method: RankMethod::Ordinal, descending: false }&lt;/code&gt; for a sort direction, &lt;code&gt;.clone().lazy()&lt;/code&gt; and &lt;code&gt;.collect()&lt;/code&gt; on every chain, &lt;code&gt;.alias()&lt;/code&gt; for every column rename. The type system does catch real bugs at compile time. But the API fights you on simple things. A rank call in C takes four positional arguments. In Rust it's a struct + enum + method chain. You'll need a few hours before it feels natural.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Bottom line:&lt;/strong&gt; Pandas is the most intuitive. The C DSL is the most surprising ~ it shouldn't be this readable for C, but it is. A Pandas developer can read a Crescent screener and understand every line without explanation. Rust/Polars gets the job done but makes you work for it.&lt;/p&gt;
&lt;h2 id=memory&gt;Memory&lt;/h2&gt;&lt;p&gt;Pandas uses about 110 MB for the small dataset (970 rows). Crescent uses 2-4 MB. The difference comes down to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Bump arena allocation&lt;/strong&gt; ~ the CSV parser allocates one large arena (~128 KB stack buffer + linked slabs for overflow) and doles out memory linearly. No per-field &lt;code&gt;malloc&lt;/code&gt;. Freeing is a single arena reset.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Columnar storage&lt;/strong&gt; ~ float arrays are contiguous &lt;code&gt;float*&lt;/code&gt; blocks, not arrays of Python objects with refcounting overhead.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;No interpreter&lt;/strong&gt; ~ no Python object overhead per value (24 bytes per Python float object, plus an 8-byte pointer to it, vs 4 bytes per C float).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;No GC&lt;/strong&gt; ~ deterministic allocation, no collection pauses. The pattern is "allocate arena, process, free arena".&lt;/li&gt;
&lt;/ul&gt;
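The bump-arena pattern in the first bullet is simple enough to show in full. This is a minimal sketch of the idea ~ the real parser adds a stack buffer and overflow slabs, but the core is one big block, a cursor that only moves forward, and a whole-arena reset:

```c
#include <stddef.h>
#include <stdlib.h>

/* Minimal bump arena: one malloc up front, linear allocation,
 * single-call reset. No per-allocation free. */
typedef struct {
    unsigned char *base;
    size_t cap, used;
} Arena;

int arena_init(Arena *a, size_t cap)
{
    a->base = malloc(cap);
    a->cap  = cap;
    a->used = 0;
    return a->base != NULL;
}

void *arena_alloc(Arena *a, size_t n)
{
    size_t aligned = (a->used + 15) & ~(size_t)15;   /* 16-byte align */
    if (aligned + n > a->cap)
        return NULL;          /* a real arena would chain a new slab here */
    a->used = aligned + n;
    return a->base + aligned;
}

void arena_reset(Arena *a)   { a->used = 0; }        /* free everything at once */
void arena_destroy(Arena *a) { free(a->base); a->base = NULL; }
```

Allocation is a pointer bump plus a bounds check; freeing an entire parse is `arena_reset`, which is why per-field `malloc`/`free` never shows up in the profile.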
&lt;h2 id=where-the-speed-comes-from&gt;Where the speed comes from&lt;/h2&gt;&lt;p&gt;The ~49x speedup (vs Pandas) isn't from one thing. It's a stack of small wins:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Multi-threaded CSV parsing&lt;/strong&gt; ~ Crescent splits the file into chunks, parses each chunk on a separate thread, then merges. The CSV parser itself is RFC-4180 compliant with quote escaping, arena allocation, and zero heap allocations per field. Pandas parses CSV on a single thread.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Memory locality&lt;/strong&gt; ~ Columnar storage means all ROE values sit in a contiguous &lt;code&gt;float*&lt;/code&gt; array. A &lt;code&gt;filter(roe &amp;gt; 0.15)&lt;/code&gt; touches one cache line per 16 values. Row-oriented storage (Python list of tuples) scatters values across memory.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No refcounting&lt;/strong&gt; ~ Every Python float access increments and decrements a reference count. Over 970 rows × 50 columns × multiple operations, this dominates the profile. Pandas user CPU is 2.48 s for 0.59 s wall time ~ it's saturating over 4 cores with refcounting and GC overhead.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
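The memory-locality point is easiest to see in code. A filter over a contiguous `float*` column is a loop the compiler can vectorize; this branchless variant (my sketch, assuming `out` has room for `n` indices) writes the candidate index every iteration and only advances the cursor on a match:

```c
#include <stddef.h>

/* Columnar filter over a contiguous float column. The candidate index
 * is written unconditionally and the cursor advances only on a match,
 * so the loop body has no data-dependent branch.
 * out must have room for n entries; returns the match count. */
size_t filter_gt(const float *col, size_t n, float threshold, size_t *out)
{
    size_t k = 0;
    for (size_t i = 0; i < n; i++) {
        out[k] = i;
        k += (col[i] > threshold);
    }
    return k;
}
```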
&lt;h2 id=rustpolars-the-blazingly-mediocre-part&gt;Rust/Polars: The Blazingly Mediocre Part&lt;/h2&gt;&lt;p&gt;After seeing Crescent hit 12 ms, I thought: "Okay, but what if we just use Polars from Rust directly? No Python, no GIL, no FFI ~ pure native speed." I rewrote the entire screener in Rust using &lt;code&gt;polars&lt;/code&gt; 0.53. Identical output, 55 columns, same data to the last decimal.&lt;/p&gt;
&lt;p&gt;Here's the full picture across all data sizes:
&lt;img src="https://i.postimg.cc/JnTQnKCb/crescent-04.png" alt="crescent-04" /&gt;&lt;/p&gt;
&lt;p&gt;Rust/Polars loses at &lt;strong&gt;every&lt;/strong&gt; data size. There is no crossover point where it catches up. The professional, native-compiled, battle-tested DataFrame library loses to a side-project C program on small data (10.8x), medium data (2.2x), and large data (3.3x).&lt;/p&gt;
&lt;p&gt;The resource comparison at 1.9M rows:
&lt;img src="https://i.postimg.cc/4d81dwRh/crescent-05.png" alt="crescent-05" /&gt;&lt;/p&gt;
&lt;p&gt;The numbers are clear: at 970 rows Crescent finishes in &lt;strong&gt;12 ms&lt;/strong&gt; while Rust/Polars takes &lt;strong&gt;130 ms&lt;/strong&gt;. At 1.9M rows it's &lt;strong&gt;4.2 s&lt;/strong&gt; versus &lt;strong&gt;13.81 s&lt;/strong&gt;. Rust is not the bottleneck here ~ the architecture is.&lt;/p&gt;
&lt;p&gt;Crescent stores floats as plain &lt;code&gt;float*&lt;/code&gt;: a pointer and a length. No Arrow buffers, no null bitmaps, no offset tables, no buffer metadata. When a filter says &lt;code&gt;roe &amp;gt; 0.15&lt;/code&gt;, it generates a tight loop over a contiguous array. The compiler vectorizes it, the prefetcher keeps up, and the result is a stream of indices. CPU usage stays low: &lt;strong&gt;107%&lt;/strong&gt; at small data, &lt;strong&gt;505%&lt;/strong&gt; at 1.9M rows.&lt;/p&gt;
&lt;p&gt;Polars uses Arrow-backed columns where every access traverses buffer metadata, null bitmaps, and offset tables. For 970 rows, the query optimizer's cost model alone takes more time than the actual filter work. Polars CPU usage hits &lt;strong&gt;301%&lt;/strong&gt; on small data and &lt;strong&gt;804%&lt;/strong&gt; at 1.9M rows ~ &lt;mark&gt;nearly 3x more cores on average for a slower result&lt;/mark&gt;. The abstractions are sound in general, but at this scale they add indirection without delivering compensating speed.&lt;/p&gt;
&lt;h3 id=what-went-wrong&gt;What went wrong?&lt;/h3&gt;&lt;p&gt;Polars brought a query optimizer, Arrow buffers, and a thread pool to a street fight. Those layers are designed to pay off on large, heterogeneous workloads; here they never get the chance.&lt;/p&gt;
&lt;p&gt;For small data, the overhead dominates outright. For medium data, it narrows but never disappears. For large data, the optimizer and parallelism claw some of it back ~ just not enough to catch Crescent.&lt;/p&gt;
&lt;p&gt;Why no benchmark with a larger dataset? Polars in Rust is what stopped me: the 866 MB CSV file stutters my system heavily, to the point where I have to close my other apps and browsers...quite embarrassing if you ask me. Crescent in C23 had no such issue ~ I can use my system just fine while the benches run. Now imagine what a 5-10 GB file would do to Polars in Rust.&lt;/p&gt;
&lt;p&gt;I often run benches on both a quiet and a noisy system. For my use case the noisy test is more relevant, since my machine isn't only for writing programs: it is also monitoring the stock markets, entering algorithmic trades, and running several data-scraping bots. Here's what I wrote in my cgrep article:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://i.postimg.cc/wjQZBTSW/noisy-bench.png" alt="noisy-bench" /&gt;&lt;/p&gt;
&lt;p&gt;So, do you close your other apps when you're running a program? Or do you keep them running? Exactly my point. Too many benchmarks ignore how a program holds up under contention ~ I think a controlled noisy bench should be the norm.&lt;/p&gt;
&lt;h3 id=is-rustpolars-ever-faster&gt;Is Rust/Polars ever faster?&lt;/h3&gt;&lt;p&gt;At no data size tested (430 KB to 866 MB) did Rust/Polars beat Crescent. The gap narrows at 10 MB (2.2x) but widens again at larger sizes (3.3x at 1.9M rows) as Crescent's multi-threaded operations scale with data.&lt;/p&gt;
&lt;h2 id=under-the-hood-why-crescents-engine-is-faster&gt;Under the Hood: Why Crescent's engine is faster&lt;/h2&gt;&lt;p&gt;Let's do a deep dive into Crescent's design ~ you'll see why putting time into application architecture pays off. &lt;mark&gt;Crescent wins because the execution engine is tuned for a very specific shape of workload&lt;/mark&gt;: dense numeric columns, repeated fixed-schema pipelines, low-cardinality categorical strings, and a DSL that carries row selections around as index views instead of copying whole frames after every step.&lt;/p&gt;
&lt;p&gt;Polars is solving a broader problem. It supports Arrow semantics, null bitmaps, chunked arrays, lazy planning, type coercion, streaming execution, and a much more general optimizer surface. Crescent solves a narrower problem and exploits that narrowness aggressively.&lt;/p&gt;
&lt;p&gt;How the Crescent engine does it:&lt;/p&gt;
&lt;h3 id=1-the-dsl-is-view-first-not-frame-first&gt;1. The DSL is view-first, not frame-first&lt;/h3&gt;&lt;p&gt;&lt;code&gt;dfq_from_frame()&lt;/code&gt; does &lt;strong&gt;not&lt;/strong&gt; clone the input frame. It starts with:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;source = frame&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;view = NULL&lt;/code&gt; meaning "full-range sentinel"&lt;/li&gt;
&lt;li&gt;&lt;code&gt;view_n = frame-&amp;gt;num_rows&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Most pipeline steps operate on an index view, not on copied columns. A filter is usually just shrinking a &lt;code&gt;size_t&lt;/code&gt; row-index vector.&lt;/p&gt;
&lt;p&gt;The key mechanism is deferred predicate fusion:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;dfq_and_where_gt()&lt;/code&gt; / &lt;code&gt;dfq_and_where_lt()&lt;/code&gt; end up in &lt;code&gt;dfq_filter()&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;dfq_filter()&lt;/code&gt; does not execute immediately; it appends a &lt;code&gt;PendingPred&lt;/code&gt; into &lt;code&gt;dsl-&amp;gt;pending_preds&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;dfq_flush_pending()&lt;/code&gt; executes the entire &lt;code&gt;AND&lt;/code&gt; chain in one fused pass&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="n"&gt;dfq_and_where_gt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;per&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.0f&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_and_where_gt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;roe&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.08f&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;dfq_and_where_lt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;der&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;1.5f&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;does not run three full dataframe filters. It compiles into one tight kernel over the active row set. Each worker gets a pointer to the source frame, the current view, a slice of rows, the full &lt;code&gt;PendingPred[]&lt;/code&gt; array, and a local output buffer of passing indices. &lt;code&gt;fused_filter_worker()&lt;/code&gt; evaluates all predicates for each row before deciding whether to push that row index. The engine only allocates the final surviving row-index vector once.&lt;/p&gt;
&lt;p&gt;This is a very different cost model from "evaluate expression node A, materialize, then B, materialize, then C."&lt;/p&gt;
&lt;h3 id=2-projection-stays-shallow-for-as-long-as-possible&gt;2. Projection stays shallow for as long as possible&lt;/h3&gt;&lt;p&gt;Older dataframe libraries often lose performance by turning metadata operations into data copies. Crescent avoids that in the fast path: &lt;code&gt;vhdf_select_columns()&lt;/code&gt; aliases columns instead of copying them. &lt;code&gt;vhdf_drop_columns()&lt;/code&gt; aliases the kept columns. &lt;code&gt;vhdf_rename_columns()&lt;/code&gt; aliases the same backing storage with a different column name.&lt;/p&gt;
&lt;p&gt;The implementation uses refcounted column aliasing through &lt;code&gt;vhdf_column_alias()&lt;/code&gt;, &lt;code&gt;shared_from&lt;/code&gt;, and &lt;code&gt;ref_count&lt;/code&gt;. In practice, &lt;code&gt;select&lt;/code&gt;, &lt;code&gt;drop&lt;/code&gt;, and &lt;code&gt;rename&lt;/code&gt; are mostly metadata edits plus a reference-count bump.&lt;/p&gt;
&lt;p&gt;The DSL's projection pushdown keeps that advantage alive. &lt;code&gt;dfq_select()&lt;/code&gt; can project the source first and only apply the row view afterward through &lt;code&gt;dfq_project_source_view()&lt;/code&gt;. If the final report only prints 6 columns, Crescent tries hard not to drag 55 columns through the last materialization step.&lt;/p&gt;
&lt;p&gt;This is one of the biggest reasons the &lt;code&gt;valueHunter&lt;/code&gt; pipelines (the screener application built on Crescent) stay fast.&lt;/p&gt;
&lt;h3 id=3-materialization-is-index-gather-and-it-is-parallel&gt;3. Materialization is index-gather, and it is parallel&lt;/h3&gt;&lt;p&gt;Eventually some operations do need a real frame. When that happens, Crescent materializes through &lt;code&gt;vhdf_frame_take_indices()&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;It avoids row-by-row append loops. Instead it works in two phases: allocate destination columns up front at the exact required row count, then gather selected rows into those columns. The gather parallelizes across columns when the problem size is large enough.&lt;/p&gt;
&lt;p&gt;For numeric columns on x86, the gather path uses AVX2 helpers where available. For large outputs, Crescent spreads the gather work over multiple threads. For string columns it gathers pointers, not heap-allocated string objects. The destination frame retains the source &lt;code&gt;StringArena&lt;/code&gt;, so the gather does not duplicate string payloads.&lt;/p&gt;
&lt;p&gt;This makes "filter -&gt; collect" much cheaper than a generic deep-copy materializer.&lt;/p&gt;
&lt;h3 id=4-top-k-avoids-full-sort-whenever-the-query-only-needs-top-k&gt;4. Top-k avoids full sort whenever the query only needs top-k&lt;/h3&gt;&lt;p&gt;This is a big one in finance workloads.&lt;/p&gt;
&lt;p&gt;If the pipeline asks for:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre&gt;&lt;span&gt;&lt;/span&gt;&lt;span class="n"&gt;dfq_nsmallest&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;quot;per&amp;quot;&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Crescent does not sort the full active dataset unless it has to. &lt;code&gt;dfq_nsmallest()&lt;/code&gt; goes through &lt;code&gt;dfq_topk()&lt;/code&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;build &lt;code&gt;(value, original_idx)&lt;/code&gt; pairs&lt;/li&gt;
&lt;li&gt;do multi-threaded pair extraction for large views&lt;/li&gt;
&lt;li&gt;run &lt;code&gt;vhdf_select_topk()&lt;/code&gt; to partition/select the best &lt;code&gt;k&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;sort only those &lt;code&gt;k&lt;/code&gt; pairs for final presentation&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So the cost is closer to &lt;code&gt;O(n)&lt;/code&gt; selection plus &lt;code&gt;O(k log k)&lt;/code&gt; final ordering, not &lt;code&gt;O(n log n)&lt;/code&gt; full sort.&lt;/p&gt;
&lt;p&gt;This is the right trade for screener-style outputs where the user wants "best 20 by PER", not "fully sorted 1.9 million row frame."&lt;/p&gt;
&lt;h3 id=5-string-handling-is-designed-around-pointer-identity-and-dictionary-codes&gt;5. String handling is designed around pointer identity and dictionary codes&lt;/h3&gt;&lt;p&gt;Crescent keeps string overhead low by treating them as interned pointers rather than general heap objects.&lt;/p&gt;
&lt;p&gt;During CSV ingest, strings go into a &lt;code&gt;StringArena&lt;/code&gt;. Two things happen: repeated equal strings often collapse to the same pointer, and string lifetime becomes arena lifetime instead of per-cell lifetime. This lets several operations use pointer identity as a fast-path before falling back to &lt;code&gt;strcmp&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;After ingest, &lt;code&gt;vhdf_frame_auto_dict_encode()&lt;/code&gt; can convert low-cardinality string columns from &lt;code&gt;VHDF_COL_STR&lt;/code&gt; to &lt;code&gt;VHDF_COL_DICT_STR&lt;/code&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;one &lt;code&gt;u32 *codes&lt;/code&gt; array per column&lt;/li&gt;
&lt;li&gt;one &lt;code&gt;const char **dict&lt;/code&gt; array of unique values&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;From there &lt;code&gt;sector == &amp;quot;Technology&amp;quot;&lt;/code&gt; becomes integer-code comparison. &lt;code&gt;isin&lt;/code&gt; over categorical columns becomes code-set membership. Group keys become much cheaper to hash and compare.&lt;/p&gt;
&lt;p&gt;For &lt;code&gt;valueHunter&lt;/code&gt;, columns like &lt;code&gt;sector&lt;/code&gt;, &lt;code&gt;industry&lt;/code&gt;, and &lt;code&gt;mc_class&lt;/code&gt; are exactly the shape where this pays off.&lt;/p&gt;
&lt;h3 id=6-csv-ingestion-is-built-for-fixed-schema-throughput-not-generic-row-objects&gt;6. CSV ingestion is built for fixed-schema throughput, not generic row objects&lt;/h3&gt;&lt;p&gt;The CSV path is one of Crescent's least glamorous but most important wins.&lt;/p&gt;
&lt;p&gt;The engine does several things that matter: &lt;code&gt;pread()&lt;/code&gt;-based chunking instead of a mutex-serialized &lt;code&gt;read()&lt;/code&gt; loop, pre-planned chunk boundaries before worker execution, quote-free fast path with edge-only boundary scanning, SIMD helpers for record boundary detection and comma splitting, 2 MiB-aligned worker buffers, one &lt;code&gt;StringArena&lt;/code&gt; per worker reused across chunks, and row-count hints pushed into the schema adapter so columns can reserve near-final capacity immediately.&lt;/p&gt;
&lt;p&gt;The field-adapter path means the parser writes directly into dataframe columns. There is no intermediate &lt;code&gt;struct Row&lt;/code&gt;, no boxed scalar representation, and no "parse into temporary rows, then convert rows into columns" phase.&lt;/p&gt;
&lt;p&gt;This is a major reason Crescent wins both wall time and RSS on the large CSV runs.&lt;/p&gt;
&lt;h3 id=7-the-numeric-representation-is-deliberately-narrow&gt;7. The numeric representation is deliberately narrow&lt;/h3&gt;&lt;p&gt;Most of the stock-screener numeric columns are stored as &lt;code&gt;f32&lt;/code&gt;, not &lt;code&gt;f64&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;So: half the bandwidth, twice as many values per cache line, lower gather/scatter cost, and lower RSS pressure through the whole pipeline.&lt;/p&gt;
&lt;p&gt;I made this choice because the domain allows it. For this workload, single-precision is enough. Crescent takes the win instead of paying for double precision everywhere "just in case."&lt;/p&gt;
&lt;p&gt;This compounds with the other choices: contiguous &lt;code&gt;float *&lt;/code&gt;, no Python object headers, no refcounts, no validity bitmap checks in the dense fast path, and fewer bytes touched per row during filter, rank, sort, and top-k.&lt;/p&gt;
&lt;h3 id=8-simd-is-used-where-the-loops-are-actually-hot&gt;8. SIMD is used where the loops are actually hot&lt;/h3&gt;&lt;p&gt;There is more SIMD in Crescent than the earlier summary makes obvious.&lt;/p&gt;
&lt;p&gt;I did not try to vectorize everything. I targeted the loops that show up over and over in dataframe workloads:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;numeric reductions&lt;/li&gt;
&lt;li&gt;numeric filters&lt;/li&gt;
&lt;li&gt;element-wise arithmetic&lt;/li&gt;
&lt;li&gt;correlation&lt;/li&gt;
&lt;li&gt;gather/materialization&lt;/li&gt;
&lt;li&gt;CSV structural scanning&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;On x86, the implementation uses runtime dispatch with &lt;code&gt;__builtin_cpu_supports(...)&lt;/code&gt; and function-level &lt;code&gt;__attribute__((target(...)))&lt;/code&gt; specializations. That means the same binary can:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;run AVX2 kernels when the CPU has AVX2&lt;/li&gt;
&lt;li&gt;use AVX2+FMA for correlation&lt;/li&gt;
&lt;li&gt;use AVX-512 for parts of the CSV scanner when available&lt;/li&gt;
&lt;li&gt;fall back to scalar code everywhere else&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So the fast path is opportunistic, not mandatory.&lt;/p&gt;
&lt;h4 id=reductions&gt;Reductions&lt;/h4&gt;&lt;p&gt;The obvious starting point is reductions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;vhdf_sum_f32_avx2()&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;vhdf_min_f32_avx2()&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;vhdf_max_f32_avx2()&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;vhdf_sum_i32_avx2()&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;vhdf_min_i32_avx2()&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;vhdf_max_i32_avx2()&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These work directly on contiguous typed arrays:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;8 &lt;code&gt;f32&lt;/code&gt; lanes at a time with &lt;code&gt;__m256&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;8 &lt;code&gt;i32&lt;/code&gt; lanes at a time with &lt;code&gt;__m256i&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For &lt;code&gt;i32&lt;/code&gt; sum, Crescent widens to &lt;code&gt;i64&lt;/code&gt; immediately with &lt;code&gt;_mm256_cvtepi32_epi64()&lt;/code&gt;, so the SIMD accumulator does not overflow on long scans. That detail matters. It is the difference between a benchmark kernel and something I can use in a real dataframe engine.&lt;/p&gt;
&lt;h4 id=filters&gt;Filters&lt;/h4&gt;&lt;p&gt;The SIMD filter path is more important than the reduction path.&lt;/p&gt;
&lt;p&gt;The kernels:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;filter_f32_avx2()&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;filter_f64_avx2()&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;filter_i32_avx2()&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;do not build a boolean mask array and compact it later. They:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;load a vector of values&lt;/li&gt;
&lt;li&gt;compare against a broadcast threshold&lt;/li&gt;
&lt;li&gt;turn the compare result into a bitmask with &lt;code&gt;movemask&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;walk the set bits with &lt;code&gt;ctz&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;write passing row indices directly into the output index buffer&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;That matches Crescent's engine design perfectly. The natural output of a filter in Crescent is not another column or another bitmap. It is a compact row-index list that the DSL can carry forward as the active view.&lt;/p&gt;
&lt;p&gt;So the SIMD path is doing two useful things at once:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;faster comparisons per iteration&lt;/li&gt;
&lt;li&gt;no extra pass to convert booleans into row indices&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id=element-wise-arithmetic&gt;Element-wise arithmetic&lt;/h4&gt;&lt;p&gt;The same pattern shows up in the arithmetic helpers:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;vhdf_scalar_op_f32_avx2()&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;vhdf_binary_op_f32_avx2()&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;vhdf_clip_f32_avx2()&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;vhdf_diff1_f32_avx2()&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These are the kernels behind operations like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;multiply a column by a scalar&lt;/li&gt;
&lt;li&gt;add or subtract two columns&lt;/li&gt;
&lt;li&gt;compute first differences&lt;/li&gt;
&lt;li&gt;clip values into a range&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Because the numeric columns are plain &lt;code&gt;float *&lt;/code&gt; arrays, the loop is just "load eight floats, apply the op, store eight floats." There is no per-element dispatch and no object layer in the middle.&lt;/p&gt;
&lt;h4 id=correlation-uses-fma&gt;Correlation uses FMA&lt;/h4&gt;&lt;p&gt;The correlation path is one of the more technical SIMD kernels.&lt;/p&gt;
&lt;p&gt;Crescent has a dense &lt;code&gt;f32&lt;/code&gt; fast path:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;has_nan_f32_avx2()&lt;/code&gt; first checks whether either input contains NaNs&lt;/li&gt;
&lt;li&gt;if not, &lt;code&gt;vhdf_col_corr_f32_fma()&lt;/code&gt; computes Pearson correlation with AVX2+FMA&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The implementation is split deliberately:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;the means are accumulated with widened &lt;code&gt;f64&lt;/code&gt; sums&lt;/li&gt;
&lt;li&gt;covariance and variance are accumulated with &lt;code&gt;_mm256_fmadd_ps&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;So the hot inner loop gets fused multiply-add while the mean computation keeps better numerical behavior than naive &lt;code&gt;f32&lt;/code&gt; accumulation.&lt;/p&gt;
&lt;p&gt;That is not a general-purpose linear algebra engine. It is a targeted statistics kernel for the dataframe operations I actually run.&lt;/p&gt;
&lt;h4 id=gather-is-vectorized-too&gt;Gather is vectorized too&lt;/h4&gt;&lt;p&gt;One easy thing to miss is that Crescent also vectorizes part of materialization.&lt;/p&gt;
&lt;p&gt;When a filtered frame finally has to become a real dense frame, &lt;code&gt;vhdf_frame_take_indices()&lt;/code&gt; uses:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;gather_f32_avx2()&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;gather_f64_avx2()&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;under the hood for numeric columns, via &lt;code&gt;_mm256_i32gather_ps()&lt;/code&gt; and &lt;code&gt;_mm256_i32gather_pd()&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;So Crescent is not only fast at contiguous scans. It also accelerates the next bottleneck that usually appears after filtering gets cheap: gathering selected rows back into dense columns.&lt;/p&gt;
&lt;h4 id=the-csv-parser-uses-simd-too&gt;The CSV parser uses SIMD too&lt;/h4&gt;&lt;p&gt;Some of the CSV speedup is from multithreading and arenas, but not all of it. The parser also uses AVX2 and AVX-512 for structural scanning:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;vhdf_mem_has_byte_*()&lt;/code&gt; checks whether a chunk contains quotes or delimiters&lt;/li&gt;
&lt;li&gt;&lt;code&gt;vhdf_find_next_newline_*()&lt;/code&gt; finds newlines quickly&lt;/li&gt;
&lt;li&gt;&lt;code&gt;vhdf_csv_record_len_structural_*()&lt;/code&gt; finds record boundaries while respecting quoted newlines&lt;/li&gt;
&lt;li&gt;&lt;code&gt;vhdf_csv_split_fields_noquote_*()&lt;/code&gt; finds comma positions in quote-free rows&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These kernels scan 32 or 64 bytes at a time, build masks, then use bit scans to locate the interesting byte positions.&lt;/p&gt;
&lt;p&gt;That is exactly the kind of work SIMD is good at:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;delimiter detection&lt;/li&gt;
&lt;li&gt;quote detection&lt;/li&gt;
&lt;li&gt;structural scanning before the parser drops into scalar cleanup logic&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So when Crescent wins on CSV load time, it is not just "threads plus arenas." It is also using vector instructions to reduce the byte-scanning cost before field conversion even starts.&lt;/p&gt;
&lt;h4 id=why-simd-pays-off-here&gt;Why SIMD pays off here&lt;/h4&gt;&lt;p&gt;SIMD only helps if the surrounding engine lets it stay close to the real bottleneck.&lt;/p&gt;
&lt;p&gt;Crescent's layout makes that possible:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;dense homogeneous arrays&lt;/li&gt;
&lt;li&gt;low per-element metadata cost&lt;/li&gt;
&lt;li&gt;no validity bitmap in the common dense path&lt;/li&gt;
&lt;li&gt;row-index outputs from filters instead of heavyweight intermediate objects&lt;/li&gt;
&lt;li&gt;fewer abstraction layers between the DSL call and the hot loop&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That means the vector kernels are doing actual data work instead of spending half their time navigating metadata.&lt;/p&gt;
&lt;h3 id=9-threading-is-conservative-on-small-data-and-aggressive-on-large-data&gt;9. Threading is conservative on small data and aggressive on large data&lt;/h3&gt;&lt;p&gt;One reason Crescent does so well at 970 rows is that it avoids acting like a distributed systems project for a toy dataset.&lt;/p&gt;
&lt;p&gt;Examples:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;dfq_flush_pending()&lt;/code&gt; stays single-threaded below 4096 active rows&lt;/li&gt;
&lt;li&gt;gather materialization only goes multi-threaded when &lt;code&gt;num_columns * num_rows&lt;/code&gt; crosses a threshold&lt;/li&gt;
&lt;li&gt;sort parallelism is only enabled for large enough views&lt;/li&gt;
&lt;li&gt;&lt;code&gt;dfq_from_frame()&lt;/code&gt; scales the compute thread count from the in-memory frame size instead of blindly using all cores&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So Crescent avoids paying thread-pool and scheduling overhead when the data is tiny, but it still fans out once the frame is large enough for the extra coordination to amortize. This design choice is often forgotten; the amateurish mindset is "put the pedal to the metal" ~ use every single core the system has, whether or not the work justifies it.&lt;/p&gt;
&lt;p&gt;This is exactly how my cgrep managed to beat ripgrep using much less resources (especially RAM). &lt;a href='https://hwisnu.bearblog.dev/building-cgrep-using-safe_ch-custom-header-new/'&gt;You can read about cgrep here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This explains a lot of why the 970-row case is absurdly fast while the 1.9M-row case still scales.&lt;/p&gt;
&lt;h3 id=10-this-benchmark-fits-crescents-fast-path-unusually-well&gt;10. This benchmark fits Crescent's fast path unusually well&lt;/h3&gt;&lt;p&gt;This is also where I need to be honest.&lt;/p&gt;
&lt;p&gt;Crescent wins here because this benchmark sits right in its sweet spot: dense numeric columns, a fixed schema, the same pipeline run repeatedly, mostly non-null data, low-cardinality categorical strings, and top-k outputs rather than arbitrary joins or nested types.&lt;/p&gt;
&lt;p&gt;Polars is built for a broader world. It handles nested types, heavy null propagation, ad-hoc expressions, Parquet-first analytics, and large multi-way joins over heterogeneous sources. For those workloads, its abstractions pay for themselves. This workload is not one of them.&lt;/p&gt;
&lt;p&gt;This workload is a stock screener with known columns and repeated filters. Crescent is optimized for exactly that.&lt;/p&gt;
&lt;p&gt;The gap exists because Crescent's engine is simply more efficient: fewer layers, fewer transient allocations, fewer bytes moved, fewer metadata checks in hot loops, and fewer full-frame materializations. Rust itself is not the bottleneck; the architecture is. This connects back to the earlier section: it is worth taking the time to think about a program's design.&lt;/p&gt;
&lt;p&gt;I ran the benchmark many times (no fewer than 30 runs) and it always paints the same picture. Repeated back-to-back runs also mean the programs were measured under warm-cache conditions. Here I want to point out that Polars' data structures, cache locality, and memory management leave a lot to be desired. The evidence was clear: massive page faults and I/O operations even after running the same program on the same data multiple times in a row.&lt;/p&gt;
&lt;p&gt;Crescent, on the other hand, drops to essentially zero I/O ops on the second run, with comparatively very few page faults ~ which implies good cache locality, which in turn implies sound data structures and memory management. Crescent's flat arrays and bump-arena allocator walk fewer pages. Polars' chunked buffers and metadata-heavy columns keep the kernel busy even when the data is already in RAM.&lt;/p&gt;
&lt;p&gt;That is the real performance story.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://i.postimg.cc/15WxszwG/Crescent-Polars-utime.png" alt="crescent-polars-utime" /&gt;
Note: the above is a warm run of the benchmark. Crescent is very clean, with zero I/O ops and far fewer page faults than Polars-Rust, which racks up loads of both even on a warm cache.&lt;/p&gt;
&lt;p&gt;A note on the hyperfine and zoop benchmarks below: I had to close all other apps, because otherwise Polars would run super slow and the elapsed time would balloon to around 19-20 seconds ~ compare that to Crescent, which has no problem with other apps running. This "embarrassingly parallel" situation is something I often hit with Rust programs (ripgrep, Polars, compiling Rust itself), where they burn through most if not all available cores in order to be "Fast". Blazingly mediocre indeed!&lt;/p&gt;
&lt;p&gt;&lt;img src="https://i.postimg.cc/vZt4Qkbv/hyperfine-crescent-polars.png" alt="hyperfine-00" /&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src="https://i.postimg.cc/P5QCt9T4/zoop-crescent-polars.png" alt="zoop-00" /&gt;&lt;/p&gt;
&lt;h2 id=dx-on-building-crescent&gt;DX on building Crescent&lt;/h2&gt;&lt;p&gt;cforge, safe_c.h, and Crescent together make C feel closer to Python than I expected. But "closer" does not mean "the same." The workflow is different, and understanding where it shines and where it fights you matters if you're thinking about doing something similar.&lt;/p&gt;
&lt;h3 id=what-feels-like-python&gt;What feels like Python&lt;/h3&gt;&lt;p&gt;&lt;strong&gt;Auto-generation eliminates the boring parts.&lt;/strong&gt; I never write struct serialization, CSV adapters, or reflection boilerplate. &lt;code&gt;cforge gen-struct&lt;/code&gt; and &lt;code&gt;cforge reflect&lt;/code&gt; generate thousands of lines of C I would have otherwise typed by hand. The schema changes, I re-run one command, and the code updates.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Incremental builds are fast.&lt;/strong&gt; &lt;code&gt;cforge build main&lt;/code&gt; hashes every source file with SHA-256. Changing one file triggers recompilation of exactly that file across all three stages ~ typically under one second. The sanitizer stage runs automatically, so I find use-after-free bugs within seconds of introducing them, not in production.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;RAII macros remove the cleanup tax.&lt;/strong&gt; In plain C, every error path needs &lt;code&gt;goto cleanup&lt;/code&gt; with a carefully ordered set of frees. With &lt;code&gt;safe_c.h&lt;/code&gt;, I declare &lt;code&gt;AUTO_DataFrame(df)&lt;/code&gt; and the cleanup happens when the variable goes out of scope. The DSL pipelines read top-to-bottom without visual noise from memory management.&lt;/p&gt;
&lt;h3 id=what-does-not-feel-like-python&gt;What does not feel like Python&lt;/h3&gt;&lt;p&gt;&lt;strong&gt;No REPL.&lt;/strong&gt; In Python I filter a column, look at the head, tweak the filter, repeat. In C I edit the pipeline, run &lt;code&gt;cforge build main&lt;/code&gt;, execute, check output, repeat. The loop is tighter than you'd think ~ under a second for incremental builds ~ but it is still a loop, not a conversation. For exploratory data analysis, Python wins.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Error messages are rougher.&lt;/strong&gt; Pandas tells you &lt;code&gt;KeyError: 'ColumnName'&lt;/code&gt;. C tells you your program segfaulted. AddressSanitizer gives you a stack trace pointing to the exact line, which is better than raw C, but it is still not a friendly traceback.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Segfaults happen during development.&lt;/strong&gt; That is the reality of C. The sanitizer catches most of them before release, but you still spend time in GDB occasionally. The trade is explicit: you pay attention to memory in exchange for deterministic performance and no GC pauses.&lt;/p&gt;
&lt;h3 id=the-build-pipeline-as-a-safety-net&gt;The build pipeline as a safety net&lt;/h3&gt;&lt;p&gt;The three-stage build is not ceremony ~ it is a net:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Static analyzer&lt;/strong&gt; catches null dereferences and buffer overflows at compile time.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Sanitizer&lt;/strong&gt; catches heap overflows and use-after-free at runtime during test execution.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Release&lt;/strong&gt; builds with hardening flags once the first two pass.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I have caught bugs in stage 1 or 2 that would have been silent data corruption in Python. The cost is a few seconds of build time. The benefit is confidence that the binary is solid before it ever touches real data.&lt;/p&gt;
&lt;h3 id=would-i-do-it-again&gt;Would I do it again?&lt;/h3&gt;&lt;p&gt;For a known pipeline that runs on a schedule ~ a daily stock screener, a report generator, an ETL job with fixed queries ~ absolutely. You write the pipeline, validate it against the Python reference once, and then the C binary runs in 12 ms instead of 590 ms. For a webapp backend, that is the difference between "please wait" and "instant response."&lt;/p&gt;
&lt;p&gt;For exploratory data analysis ~ trying different filters, looking at distributions, prototyping models ~ I would still reach for Pandas or Polars. The REPL workflow is too valuable to give up.&lt;/p&gt;
&lt;p&gt;Will I continue using this custom-tool combo? Yes. The only thing that could pull me away is a stable Zig 1.0 release. I have scaled back my Zig usage because of breaking changes in the pre-1.0 ecosystem, but a stable release could change that. Until then, cforge + safe_c.h is my stack for performance- and correctness-critical work.&lt;/p&gt;
&lt;h3 id=a-hrefhttpskommentscloudfbce7e3600755a75495de3comments-section-herea&gt;&lt;a href='https://komments.cloud/fbce7e3600755a75495de3'&gt;Comments section here&lt;/a&gt;&lt;/h3&gt;&lt;p&gt;&lt;mark&gt;If you enjoyed this post, click the little up-arrow chevron at the bottom left of the page to help it rank in Bear's Discovery feed, and if you have any questions, please use the comments section.&lt;/mark&gt;&lt;/p&gt;
</description>
      <author>hidden (hwisnu)</author>
      <guid isPermaLink="false">https://hwisnu.bearblog.dev/crescent-dataframe-library-in-c23-built-using-safe_ch-and-cforge/</guid>
      <pubDate>Fri, 01 May 2026 04:25:00 +0000</pubDate>
    </item>
    <item>
      <title>you should be joymaxxing your projects</title>
      <link>https://tala.bearblog.dev/you-should-be-joymaxxing-your-projects/</link>
      <description>&lt;p&gt;When I'm working on a passion project (or anything, really), I tend to be obsessively serious in ways that stress me out. Something I initially &lt;em&gt;wanted&lt;/em&gt; to do becomes a thing I &lt;em&gt;have&lt;/em&gt; to do. I've subconsciously cultivated this behavior, as I only ever notice it late: tired, tense, wondering why something I chose to do feels like something I owe, in a sense.&lt;/p&gt;
&lt;p&gt;I'm not always caught up on social media jargon, but I'm interested in the -maxxing suffix I've seen circulating online. I liked the shamelessness of it, the idea that you can just decide to maximize something (good), as aggressively and deliberately as you want. Joymaxxing, specifically: making happiness non-negotiable, not a &lt;em&gt;reward&lt;/em&gt; for finishing a task but a &lt;em&gt;condition&lt;/em&gt; maintained throughout the process. It's why my current goal, when working on something, is prioritizing fun.&lt;/p&gt;
&lt;p&gt;The older I get, the more I feel permission to play. In practice, this looks different for everyone: working from a cafe instead of my desk, reading something adjacent to the project just for the pleasure of it, and following interesting tangents. All of these things alter the texture of the work in unexpectedly nice ways, because they remind me that I'm a human being doing the thing, not a machine producing it.&lt;/p&gt;
&lt;p&gt;The shift isn't from &lt;em&gt;serious&lt;/em&gt; to &lt;em&gt;unserious&lt;/em&gt;. I still care passionately about my projects, as I always have. However, I stopped treating "enjoyment" as a threat to "quality," as if having too much fun meant I'd generate something mediocre or straight-up horrendous. Looking back, perhaps the work I'm most proud of is the work I've enjoyed doing the most.&lt;/p&gt;
</description>
      <author>hidden (tala)</author>
      <guid isPermaLink="false">https://tala.bearblog.dev/you-should-be-joymaxxing-your-projects/</guid>
      <pubDate>Wed, 29 Apr 2026 13:42:00 +0000</pubDate>
    </item>
    <item>
      <title>A year on an e-reader</title>
      <link>https://wombat.bearblog.dev/a-year-on-an-e-reader/</link>
      <description>&lt;p&gt;E-readers are something of a niche technology. Hardly anybody has an e-reader in public, especially compared to the number of people with a smartphone in hand. Despite that, I took the plunge last year and got an Android e-reader — the Meebook M6.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://ibb.co/RTpXKr6c"&gt;&lt;img src="https://i.ibb.co/7tJ3DMpK/image.png" alt="A photo of a screensaver, a cartoon drawing of capybaras, displayed on an e-reader. The Meebook logo is visible on the bottom bezel. The e-reader sits upon a laptop keyboard."&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;It’s been nothing short of awesome. I wanted to yap about my reading setup, because cool tech is cool, plus a bit about how I approach reading as a hobby.&lt;/p&gt;
&lt;h2 id=why-an-e-reader&gt;Why an e-reader?&lt;/h2&gt;&lt;p&gt;I read primarily for fun and enjoyment. If I bought a physical copy of every book I wanted to read, my apartment would overflow with books. Libraries perform an absolutely vital service, but not every library book is in as good a condition as I’d like. Some books are way too bulky to carry on the go. Others are printed in teeny tiny font sizes (what is this, a book for ants?).&lt;/p&gt;
&lt;p&gt;For several years, I read e-books on my phone, alongside the occasional library book. To this day, I only purchase physical copies of books with sentimental value (or, rarely, books I can’t find online). Reading on my phone was… alright. Good enough for me to assume that e-readers were largely redundant. After all, many of us read (or skim?) &lt;em&gt;colossal&lt;/em&gt; volumes of text on screens every day. Hell, you’re most likely doing it right now.&lt;/p&gt;
&lt;p&gt;That changed when I was walking around a bougie bookstore that happened to have an e-reader section. I wasn’t (and still am not) convinced that the displays looked anything like real paper, but I was blown away by how comfortable they were to read on. I hadn’t even known I'd been settling for less. That same day, I was back home looking for reviews on budget options to try out myself.&lt;/p&gt;
&lt;h2 id=hardware&gt;Hardware&lt;/h2&gt;&lt;p&gt;I eventually settled on the Meebook M6 for a few reasons. I wanted something that would fit in my pocket, so a screen larger than 7 inches wouldn’t do. At the same time, I wasn’t a fan of the phone-shaped form factor, like that of the Hisense Hi Reader and Boox Palma. I knew from experience that I disliked reading with a bigger font on a narrow screen.&lt;/p&gt;
&lt;p&gt;So that left the 6 inch, regular Kindle-shaped options, though I didn’t exactly want a Kindle because of the &lt;a href='https://goodereader.com/blog/kindle/amazon-removing-download-and-transfer-on-the-kindle-feb-26th'&gt;bullshit&lt;/a&gt; Amazon had been pulling. Luckily for me, the Meebook M6 just so happened to be on sale on Taobao for under 100 USD — it had decent reviews, so I gave it a shot.&lt;/p&gt;
&lt;p&gt;It is a lovely little Android device. Its royal blue bottom bezel and flush screen are immediately pleasing to the eye. It is basic hardware-wise, but there are a couple of neat features.&lt;/p&gt;
&lt;p&gt;It has warm and cool front lights that can be adjusted independently with sliders. The warm light is particularly nice for reading in bed. There are three presets: “Day”, “Night”, and “Bed”. I use “Day” most often, which is most suited to reading in weird indoor lighting (rather than daylight).&lt;/p&gt;
&lt;p&gt;The refresh rate is also adjustable with four presets: from the regular “regal mode” with a slow refresh rate and no ghosting, to the “A2 topspeed mode” with the fastest refresh rate but heavy ghosting. The former is most suited for reading, and is what I use 95% of the time. The latter option is much appreciated when I’m using apps not optimized for the display, especially when scrolling.&lt;/p&gt;
&lt;h2 id=software&gt;Software&lt;/h2&gt;&lt;p&gt;I was not as enthused with the default software experience. I disliked the default launcher and the default reader app was off when rendering English text. The great thing about Android is that it took me about five seconds to ditch them for &lt;a href='https://github.com/gezimos/inkOS'&gt;a new launcher&lt;/a&gt; and &lt;a href='https://koreader.rocks/'&gt;a new reader&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=koreader-rocks&gt;KOReader rocks!&lt;/h3&gt;&lt;p&gt;I’m a bit anal about customizing my reading experience; KOReader has every feature I could ever wish for. It was a match made in heaven. The vast &lt;a href='https://koreader.rocks/user_guide/#L2-styletweaks'&gt;style tweaks&lt;/a&gt; options really satisfy that urge to have everything look exactly as I want it. I keep a handful of these enabled all the time — shoutout to spacing between paragraphs.&lt;/p&gt;
&lt;p&gt;I was also impressed at how great KOReader is for &lt;a href='https://koreader.rocks/user_guide/#L1-pdfs'&gt;reading PDFs&lt;/a&gt;. The reflow feature works amazingly for many, but not all, PDFs. If it doesn’t, KOReader can automatically scroll to different parts of the screen depending on the reading direction. For example, for a two-column PDF, it can automatically start at the top left, go down until you hit the bottom left, then go up to the top right. Much, much easier than manually zooming in and scrolling.&lt;/p&gt;
&lt;p&gt;All the features can be overwhelming, to be fair. Fortunately, the documentation is pretty robust. And things look good out of the box, even if you don’t mess with anything!&lt;/p&gt;
&lt;h4 id=reading-fonts&gt;Reading fonts&lt;/h4&gt;&lt;p&gt;The English fonts that I rotate between are mostly from &lt;a href='https://github.com/nicoverbruggen/ebook-fonts'&gt;this GitHub repo&lt;/a&gt; containing fonts tweaked for e-reading. Here’s a &lt;a href='https://ebook-fonts.nicoverbruggen.be/'&gt;live showcase&lt;/a&gt; for how each font would look in action. Pretty cool!&lt;/p&gt;
&lt;p&gt;My current default is their version of Charter. For informational non-fiction, I sometimes switch to a sans-serif font like Atkinson Hyperlegible Next or Jost. For (Traditional) Chinese, I use &lt;a href='https://github.com/chiron-fonts/chiron-sung-hk/blob/release/README.en.md'&gt;Chiron Sung HK&lt;/a&gt;, an aesthetically pleasing serif font.&lt;/p&gt;
&lt;p&gt;This is how I've gotten things to look:
&lt;a href="https://ibb.co/7xdXnGYf"&gt;&lt;img src="https://i.ibb.co/Gf40WVx6/Reader-jack-london-white-fang-epub-p5-2026-04-30-024656.png" alt="A screenshot of my current reading setup in KOReader, with text from White Fang by Jack London."&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=where-to-get-e-books&gt;Where to get e-books&lt;/h2&gt;&lt;p&gt;I sideload most of my books as EPUBs, and occasionally PDFs. This means I need my books DRM-free — which most e-books being sold are not. Apart from the obvious solution&lt;sup class="footnote-ref" id="fnref-1"&gt;&lt;a href="#fn-1"&gt;1&lt;/a&gt;&lt;/sup&gt;, here are a few (100% legit!) suggestions for where to get e-books you actually own:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href='https://www.gutenberg.org/'&gt;Project Gutenberg&lt;/a&gt; is well-known for hosting a huge collection of public domain works. However, their earlier titles tend to be inconsistent in quality.&lt;/li&gt;
&lt;li&gt;My go-to for older English works is &lt;a href='https://standardebooks.org/ebooks'&gt;Standard Ebooks&lt;/a&gt;. Despite their smaller collection, each book is meticulously proofread and beautifully formatted to elevate your reading experience. Seriously, they look &lt;em&gt;gorgeous&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;&lt;a href='https://www.kobo.com/us/en/p/drm-free'&gt;Kobo&lt;/a&gt; and &lt;a href='https://www.ebooks.com/en-hk/drm-free/'&gt;eBooks.com&lt;/a&gt; are online storefronts with decently sized DRM-free catalogs. (The most popular fiction titles are those published by Tor, like Brandon Sanderson’s novels.)&lt;/li&gt;
&lt;li&gt;&lt;a href='https://libreture.com/bookshops/'&gt;Here&lt;/a&gt; is an extensive list of publishers that offer DRM-free books directly.&lt;/li&gt;
&lt;li&gt;If you happen to have institutional access through a university or library, certain publishers allow DRM-free downloads. I find this especially handy for academic publications or textbooks. I recently downloaded several books from Oxford University Press this way.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=what-ive-learned&gt;What I’ve learned&lt;/h2&gt;&lt;p&gt;Sometimes, I get comments when I’m reading in public. People have told me, with a touch of regret, that they could never get through a whole book, or that the last time they picked up a book was in sixth grade. It might surprise them that I have struggled with keeping up a reading habit, too.&lt;/p&gt;
&lt;p&gt;I don’t know how common this is, but I will confess that I got this device with the expectation that I would annihilate my entire backlog. A misguided approach, of course. The more pressure I put on myself to read, the less appealing and more overwhelming it felt. This was despite the fact that picking up a book was easier and more effortless than ever. The sole barrier was my mind.&lt;/p&gt;
&lt;p&gt;If you’re anything like me, it’s easy to hype something up in your head — even highly enjoyable and rewarding things — and end up demotivating yourself. I still find myself having to adjust my mindset to get over these hurdles. Some of these adjustments include:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Learning how to forgive myself for not being consistent day-to-day. Sometimes, consistency can look more like week-to-week, or month-to-month. Even then, it’s okay to fall off or take a break. I can come back anytime.&lt;/li&gt;
&lt;li&gt;In a similar vein, letting myself put books on hold or drop them — whether I’m overwhelmed, exhausted, or just don’t enjoy them. I can pick them back up whenever I want to.&lt;/li&gt;
&lt;li&gt;Approaching books with wonder and curiosity, rather than focusing on getting through as many as possible. Learning new stuff and gaining new perspectives is fun and awesome.&lt;/li&gt;
&lt;li&gt;Being mindful of when a passage I’m reading is impacting me, and really feeling that wave of emotion. I find this heightens the experience and pushes me to get more into it.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Much of the above applies to other hobbies as well. While it’s easier said than done, maintaining intrinsic motivation is essential for hobbies, which are so vital to a meaningful life. Most importantly, have fun!&lt;/p&gt;
&lt;hr /&gt;
&lt;div style="text-align: center;"&gt;
&lt;a class="previous-post" href="/steamed-hams" title="My favorite meme: Steamed Hams"&gt;Previous&lt;/a&gt; | &lt;a class="next-post" href="/skol" title="skol"&gt;Next&lt;/a&gt;
&lt;/div&gt;
&lt;div style="text-align: center;"&gt;
&lt;a href="mailto:mortalwombat@proton.me"&gt;Reply via email&lt;/a&gt; 
&lt;/div&gt;
&lt;hr /&gt;
&lt;section class="footnotes"&gt;
&lt;ol&gt;
&lt;li id="fn-1"&gt;&lt;p&gt;It surprised me that depending on the jurisdiction, removing DRM from e-books you have purchased for personal use might be in &lt;a href='https://answers.justia.com/question/2025/03/23/is-it-legal-to-remove-drm-from-kindle-eb-1054254'&gt;a legal gray area&lt;/a&gt;. Personally, I find it difficult to argue against on moral grounds (compared to piracy).&lt;a href="#fnref-1" class="footnote"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;/section&gt;
</description>
      <author>hidden (wombat)</author>
      <guid isPermaLink="false">https://wombat.bearblog.dev/a-year-on-an-e-reader/</guid>
      <pubDate>Wed, 29 Apr 2026 18:54:00 +0000</pubDate>
    </item>
    <item>
      <title>reddit now forces you to use their app on mobile</title>
      <link>https://pseudosingleton.com/reddit-now-forces-you-to-use-their-app-on-mobile/</link>
      <description>&lt;p&gt;One of my personal rules for social media is that I only use the web version of sites. The web version is better for privacy, far less addictive, and makes my phone less bloated.&lt;/p&gt;
&lt;p&gt;In the last few days, Reddit added a great new feature to the mobile web version of their site.&lt;/p&gt;
&lt;p&gt;Now, when you try to browse their site on your phone, you will encounter the below paywall-esque popup after scrolling a bit.&lt;/p&gt;
&lt;div class="preview"&gt;
&lt;img src="https://bear-images.sfo2.cdn.digitaloceanspaces.com/pseudosingleton/image-1.webp" alt="Reddit displays a popup saying 'Get the app to keep using reddit.' Really should say 'get the app so we can serve you better ads.'"/&gt;
&lt;/div&gt;
&lt;p&gt;You cannot dismiss the popup. You must download the app or request a desktop version of the site.&lt;/p&gt;
&lt;p&gt;I guess this is finally a good enough nudge to quit Reddit completely.&lt;/p&gt;
&lt;p&gt;No love lost&lt;sup class="footnote-ref" id="fnref-1"&gt;&lt;a href="#fn-1"&gt;1&lt;/a&gt;&lt;/sup&gt; ¯\_(ツ)_/¯&lt;/p&gt;
&lt;section class="footnotes"&gt;
&lt;ol&gt;
&lt;li id="fn-1"&gt;&lt;p&gt;Okay, some love lost. Reddit contains many of the remaining forums on the internet, so it sucks those won't be reachable anymore.&lt;a href="#fnref-1" class="footnote"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;/section&gt;
</description>
      <author>hidden (pseudosingleton)</author>
      <guid isPermaLink="false">https://pseudosingleton.com/reddit-now-forces-you-to-use-their-app-on-mobile/</guid>
      <pubDate>Fri, 01 May 2026 02:02:00 +0000</pubDate>
    </item>
    <item>
      <title>rejecting convenience</title>
      <link>https://rnotte.art/rejecting-convenience/</link>
      <description>&lt;p&gt;as i grew through my adult years, one thought has been running through my head over and over again:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;"how much have humans been sacrificing in the name of 'convenience'?"&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;that word... the word &lt;strong&gt;"convenience"&lt;/strong&gt; has become my least favorite word in the english language. i can't stand it. so many things that are bad for humans' social lives, health, and well-being, are consistently used because they're &lt;strong&gt;"convenient"&lt;/strong&gt;. why bother going to the brick-and-mortar store? amazon is more &lt;strong&gt;"convenient"&lt;/strong&gt;. why bother cooking a nice meal for yourself? doordash and uber eats are more &lt;strong&gt;"convenient"&lt;/strong&gt;. why go out and socialize with people? facebook is more &lt;strong&gt;"convenient"&lt;/strong&gt;. why use a digital camera, camcorder, or polaroid? your smartphone is more &lt;strong&gt;"convenient"&lt;/strong&gt;. why bother going to the theater or concerts? netflix and spotify are more &lt;strong&gt;"convenient"&lt;/strong&gt;. why bother making art? asking an AI to generate it for you is more &lt;strong&gt;"convenient"&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;well, i say nuts to that. from now on, i'm going to make my life as &lt;em&gt;inconvenient&lt;/em&gt; as possible. i'm going to go to the store and buy stuff in person. i'm going to make my own food with my own hands. i'm going to socialize with people face-to-face. i'm going to use a true camera instead of my phone's camera. i'm going to buy blu-rays, DVDs, and CDs instead of streaming. i'm going to take my time when creating, watching, playing, and reading a work of art.&lt;/p&gt;
&lt;p&gt;i don't want to sound high-and-mighty with this. you're not a bad person for using streaming services. but not only are these &lt;strong&gt;"convenient"&lt;/strong&gt; systems costing us more money, they're costing us our social lives and life skills, the most important things that make us human. so, in the interest of keeping my humanity, i'm going to live my life the &lt;em&gt;inconvenient&lt;/em&gt; way.&lt;/p&gt;
&lt;p&gt;and if you can, i invite you to join me in rejecting &lt;strong&gt;convenience&lt;/strong&gt;.&lt;/p&gt;
</description>
      <author>hidden (rnotte)</author>
      <guid isPermaLink="false">https://rnotte.art/rejecting-convenience/</guid>
      <pubDate>Wed, 29 Apr 2026 21:11:24 +0000</pubDate>
    </item>
    <item>
      <title>Your Instagram account is scheduled for deletion</title>
      <link>https://stitching.bearblog.dev/your-instagram-account-is-scheduled-for-deletion/</link>
      <description>&lt;p&gt;Yesterday, I deleted one of my Instagram accounts. I had two, actually. The first one was used when I was in high school and college, and that was where I followed classmates and friends. The only photos I uploaded there were more personal - a graduation photo, some more artsy pictures that were taken when I traveled, that kind of thing. I hadn’t checked that account in months, so I decided to just pull the plug on it. The second account was mostly used for things involving my hobbies or for following local businesses, and that was the one that was harder to let go of. &lt;br&gt;&lt;/p&gt;
&lt;p&gt;I tried to think about why that was, and I listed out some reasons. &lt;br&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;I follow along with what local businesses are doing. For example, what special events is the yarn store or bookstore putting on for the community this month? Has a coffee shop or food truck changed its seasonal menu?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I follow a lot of crafters, and seeing other peoples’ projects can inspire me and introduce me to new patterns, designers, and crafting techniques. I occasionally like and comment on these things and sometimes I’ll post pictures to show off my work, too.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Every Tuesday, the local bookstore’s account will post new releases to their story. I get introduced to interesting books that I can add to my TBR (to be read) list.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I watch reels (funny things and recipes, mostly) and share them with friends or save the recipes to try later. I’ve found some great recipes this way!&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I didn’t really… interact with other people on Instagram in a meaningful way, though. To ease my reluctance to deactivate my account, I revisited each point and thought about what I could do to still achieve these things outside of an Instagram account. &lt;br&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;I can go onto individual businesses’ websites and sign up for email newsletters to stay updated on their events. Most local businesses I have in mind already have websites set up.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I’m already subscribed to a few crafting subreddits. I can browse through those or share my projects there, and I can still get inspiration from those pages.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;New books are released every Tuesday. I checked my local bookstore’s website and they have a page for new releases. I can just bookmark that page and check it manually rather than viewing the bookstore’s Instagram story every Tuesday.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I don’t need to view short-form video content, especially as I know how bad it is for our brains and our attention spans. I don’t need funny reels to laugh and I don’t need recipe reels to cook. I own several cookbooks and (of course) I follow some cooking and baking subreddits if I really want to go looking for inspiration.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Making this list and writing it out really helped me realize that there is nothing unique or exclusive that I can only get from Instagram. Afterwards, I went to the settings page of my second account and “scheduled my account’s deletion” — because Meta doesn’t allow you to delete your account immediately. It schedules it out by one month, probably in the hopes that you’ll change your mind before that month is up. &lt;br&gt;&lt;/p&gt;
&lt;p&gt;I feel good, though! This means that since the start of 2026, I’ve deleted my Facebook and Instagram accounts. I’m not trying to step away from all websites entirely, but I do want to leave most social media platforms that aren’t benefitting me in any way. I think Reddit will be harder to stay away from, but I am working to minimize the time spent on that site, at least. I think I’m off to a good start!&lt;/p&gt;
</description>
      <author>hidden (stitching)</author>
      <guid isPermaLink="false">https://stitching.bearblog.dev/your-instagram-account-is-scheduled-for-deletion/</guid>
      <pubDate>Wed, 29 Apr 2026 20:57:00 +0000</pubDate>
    </item>
    <item>
      <title>Independent blogs are hard because they aren't independent enough</title>
      <link>https://blog.solazy.me/20260429/</link>
      <description>&lt;p&gt;&lt;img src="https://bear-images.sfo2.cdn.digitaloceanspaces.com/sol/saurav-mahto-ijwb7urjqyo-unsplash.webp" alt="saurav-mahto-ijWb7URJQyo-unsplash" /&gt;&lt;/p&gt;
&lt;p&gt;You often hear people say that in today's internet environment, keeping up an independent blog is extremely hard. This "hardness" usually points to a few dimensions: the tedium of technical maintenance, the pressure of continuous output, the scarcity of traffic, and the absence of social feedback.&lt;/p&gt;
&lt;p&gt;But in my view, if someone finds independent blogging hard, it is largely because they have not yet achieved true "independence."&lt;/p&gt;
&lt;p&gt;We have to admit that nothing in this world is absolutely easy. But there are essential differences between one kind of "hard" and another. If you depend on a company, or sit inside some content-creator network, that kind of "hard" is concrete and oppressive.&lt;/p&gt;
&lt;p&gt;You have to think about posting frequency, keep your content on-niche, and precisely chase the algorithm's preferences, because those are your KPIs, the chips you trade for a living. In that context, "hard" is the norm, because your will must submit to some commercial logic or platform rule.&lt;/p&gt;
&lt;p&gt;But when you choose the independent blog as your medium, you have already stepped out of all those shackles. Since you are already "independent," where does this feeling of "hardness" come from?&lt;/p&gt;
&lt;p&gt;Much of the time, the pressure is self-imposed.&lt;/p&gt;
&lt;p&gt;An independent blog should not just mean technical autonomy: owning a server, a domain, and a theme of your own choosing. It should be the embodiment of a person's independent will. If your blog's content, posting rhythm, and even choice of words still sway with other people's wills or outside judgments, then it is still not an independent blog. It is just a "friends feed" or a "marketing account" that you have moved onto private land.&lt;/p&gt;
&lt;p&gt;Some people find it hard because they care too much about "readers." They presuppose a vast audience and feel that whatever they write must deliver something to others, or must earn some affirmation and feedback. Frankly, that overestimates oneself. In today's information overload, not many people are watching your every thought.&lt;/p&gt;
&lt;p&gt;We need to keep first things first. An independent blog is, above all, a tool for recording and presenting the self. It is a tree hollow to whisper into, and a laboratory; it exists to satisfy our own desire to express ourselves and sort out our thinking. If we can put "recording myself" before "seeking approval," the so-called "hardness" of keeping it up evaporates instantly.&lt;/p&gt;
&lt;p&gt;Because doing things for yourself requires no "persistence"; only doing things for others does.&lt;/p&gt;
&lt;p&gt;From this angle, whether independent blogging is hard depends on the integrity of a person's inner core. When you can look inward in everything, things fall into place. Something as intensely private as an independent blog should be made entirely for yourself. It should not pile up words to hit a count, nor force out opinions to ride a trending topic.&lt;/p&gt;
&lt;p&gt;When a person no longer seeks external validation and is no longer held hostage by the vanity of "I must produce masterpieces," the blog stops being a task and becomes a way of living.&lt;/p&gt;
&lt;p&gt;So if you are still agonizing over how to run an independent blog well, perhaps pause and examine: who are you writing for? Is the independence you pursue independence at the level of code, or at the level of spirit?&lt;/p&gt;
&lt;p&gt;When you achieve true independence of spirit and no longer change your expression to suit someone else's will, you will find that in this free plot of your own, there is no such thing as "hard." All that remains is the pleasure of expression, and the quiet of conversing with yourself.&lt;/p&gt;
</description>
      <author>hidden (sol)</author>
      <guid isPermaLink="false">https://blog.solazy.me/20260429/</guid>
      <pubDate>Wed, 29 Apr 2026 07:00:00 +0000</pubDate>
    </item>
    <item>
      <title>Who knows that you blog?</title>
      <link>https://forkingmad.blog/who-knows-that-you-blog/</link>
      <description>&lt;p&gt;Question for the audience:  Do you tell people you blog?&lt;/p&gt;
&lt;p&gt;I was reflecting on this recently, after I inadvertently mentioned to a colleague that I had blogged about a topic a few days earlier.  That, of course, started the questioning from them: Oh, what's it called?  What do you write about?  Can I see it?&lt;/p&gt;
&lt;p&gt;I reluctantly recited the web address.  I have no idea if they did or will look.  But then I wondered:  Why was I so reluctant to promote my blog to a real-life-person-type?&lt;/p&gt;
&lt;h2 id=public-is-public&gt;Public is Public&lt;/h2&gt;&lt;p&gt;I totally accept that everything I say online is Public.  I often remind people that what they are saying on &lt;em&gt;the socials&lt;/em&gt; could haunt them in the future.   Nothing is really private on the internet.&lt;/p&gt;
&lt;p&gt;My test for what I recount online is simple:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Would I stand in the middle of a town-square, and proclaim this to anyone who stops to listen?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;If the answer is No, then I would not write it online.&lt;/p&gt;
&lt;h2 id=nothing-to-hide&gt;Nothing to hide&lt;/h2&gt;&lt;p&gt;I've therefore nothing to hide in what I have said. I stand by it all.&lt;/p&gt;
&lt;p&gt;So why do I not tell &lt;em&gt;people&lt;/em&gt; that I blog?&lt;/p&gt;
&lt;p&gt;I'm not sure I have the answer!  I am not ashamed.  I just feel that this is my little outlet, that requires no discussion with my friends or family!  My other-half knows I blog but has never once asked what I blog about, or what the address is!&lt;/p&gt;
&lt;p&gt;I appreciate some people like to be anonymous when blogging.  That's their choice.  I am an open book.  You can ask me anything and I will reply honestly, as long as it conforms to my &lt;em&gt;town-square&lt;/em&gt; rule.&lt;/p&gt;
&lt;p&gt;Back to you:  Do you tell people you blog?&lt;/p&gt;
&lt;p&gt;&lt;img src="https://bear-images.sfo2.cdn.digitaloceanspaces.com/forkingmad/communityechoes-1.webp" alt="Community Echoes" /&gt;&lt;/p&gt;
&lt;div style="border: 1px dashed; font-size: 85%; padding: 10px; line-height:1.4em; box-shadow: rgba(0, 0, 0, 0.25) 0px 25px 50px -12px; border-radius:10px;"&gt;
&lt;p&gt;Here are a few other blog posts responding to my question. &lt;a href='https://forkingmad.blog/contact/'&gt;Shout&lt;/a&gt; if you want your own added.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href='https://thatalexguy.dev/re-who-knows-that-you-blog'&gt;Alex White&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href='https://rausgerufen.de/who-knows-that-you-blog'&gt;Rausgerufen / Ransomed&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href='https://kevquirk.com/who-knows-that-you-blog'&gt;Kevin Quirk&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href='https://wrywriter.ca/posts/re-who-knows-that-you-blog'&gt;the Wry Writer&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href='https://www.gordonmclean.co.uk/2026/04/29/who-knows-that-you-blog/'&gt; Gordon @ Happily Imperfect&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href='https://kaigulliksen.com/re-who-knows-that-you-blog/'&gt;Kai Gulliksen&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href='https://ptrbrynt.com/posts/re-who-knows-that-you-blog/'&gt;Peter Bryant&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href='https://firesphere.dev/articles/re-who-knows-that-you-blog'&gt;Simon @ Firesphere&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href='https://martinvukovic.com/posts/2026/04-30-quien-sabe.html'&gt;Martín&lt;/a&gt; 🇪🇸&lt;/li&gt;
&lt;li&gt;&lt;a href='https://blog.gridranger.dev/who-knows-that-you-blog/'&gt;Dávid Bárdos&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href='https://maintz.org/re-who-knows-that-you-blog'&gt;Melli&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href='https://rnotte.art/re-who-knows-that-you-blog/'&gt;RNOTTÉ&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div id="comments"&gt;&lt;/div&gt;
&lt;script src="https://pure.komments.cloud/public/embed.js" defer&gt;&lt;/script&gt;
&lt;div class="bubbles-vote"&gt;&lt;/div&gt;
&lt;script src="https://bubbles.town/vote.js" defer&gt;&lt;/script&gt;
</description>
      <author>hidden (forkingmad)</author>
      <guid isPermaLink="false">https://forkingmad.blog/who-knows-that-you-blog/</guid>
      <pubDate>Tue, 28 Apr 2026 17:28:00 +0000</pubDate>
    </item>
    <item>
      <title>"People who don't use AI will be left behind"</title>
      <link>https://migrainebrain.bearblog.dev/people-who-dont-use-ai-will-be-left-behind/</link>
      <description>&lt;p&gt;"People who don't use AI will be left behind", they say.
I can't emphasize enough how much I hate it when I hear/read shit like that because I'm pretty sure, in fact, that what will happen is the exact opposite.&lt;/p&gt;
&lt;p&gt;People who rely on AI are the ones who will be left behind. They'll forget how to think, how to write, how to do a simple reliable search, how to tell fact from fiction... they'll forget how to fucking LEARN.
I think that's the part that makes me the saddest. What a beautiful thing it is just to learn stuff.&lt;/p&gt;
&lt;p&gt;If you believe ChatGPT can do better than you, why would you just let it? Why wouldn't you aim to be better, to learn how to be or do something that AI never could?&lt;/p&gt;
&lt;hr /&gt;
&lt;div class="reply-email"&gt;
  &lt;a href="mailto:hablacomigo@proton.me?subject=Re:%20&amp;quot;People who don&amp;#x27;t use AI will be left behind&amp;quot;"&gt;Reply to this post.&lt;/a&gt;
&lt;/div&gt;
or don't.
&lt;hr /&gt;
&lt;p&gt;&lt;a class="previous-post" href="/being-rude-anonymously-online" title="being rude anonymously online"&gt;Previous&lt;/a&gt; &lt;/p&gt;
</description>
      <author>hidden (migrainebrain)</author>
      <guid isPermaLink="false">https://migrainebrain.bearblog.dev/people-who-dont-use-ai-will-be-left-behind/</guid>
      <pubDate>Tue, 28 Apr 2026 17:33:00 +0000</pubDate>
    </item>
    <item>
      <title>It's time we stopped calling them tweets</title>
      <link>https://guinevak.bearblog.dev/its-time-we-stopped-calling-them-tweets/</link>
      <description>&lt;p&gt;I know a lot of people are still mourning the breakup of their toxic relationships with the Hellpit Formerly Known As Twitter, but it's time to move on.&lt;/p&gt;
&lt;p&gt;And in token of that, I vote we start calling X posts something more appropriate, like, say, Xcretions.&lt;/p&gt;
</description>
      <author>hidden (guinevak)</author>
      <guid isPermaLink="false">https://guinevak.bearblog.dev/its-time-we-stopped-calling-them-tweets/</guid>
      <pubDate>Tue, 28 Apr 2026 16:22:00 +0000</pubDate>
    </item>
    <item>
      <title>The balance between privacy and oversharing</title>
      <link>https://puppynet.work/the-balance-between-privacy-and-oversharing/</link>
      <description>&lt;p&gt;I care a lot about privacy. I always advocate for services that don't collect or sell data. I use VPNs, ad blockers, and am usually careful about sharing too much stuff about me online. This blog goes against that third point, and most of what I write is directly related to myself.&lt;/p&gt;
&lt;p&gt;Unfortunately, I love talking about things (and myself). The stuff that I post on this blog could dox me or allow people who were interested to build a profile of me. I think about it often when I see the sorts of stuff that other people post on Bear, and in my head I know that 95% of the time it's probably not a big deal to be a little more specific about your personal life.&lt;/p&gt;
&lt;p&gt;This results in a dilemma where I have some super cool and interesting (I think) topics but don't want to share them because it's personally identifiable information. I have my age in my Discord about me, I have my timezone and I have a link to this blog too. Those are things that would make it very easy for people to find out more about me—without me even knowing who they are.&lt;/p&gt;
&lt;p&gt;I don't want to be saying that you should never speak about yourself online ever, use a fake name, email, birthday, address, job, etc. However, I'm also saying I think it's probably a bad idea to go around posting pictures of your house, or sharing your daily commute, or videos of your car with the license plate unblurred.&lt;/p&gt;
&lt;p&gt;Since the rise of Facebook and its push for using real names and identities online, some people see it as weird to use a pseudonym in place of their legal name. In the past year or so, governments around the world have been increasingly pushing for the removal of digital privacy, and the linking of personal profiles and internet accounts. I don't like this at all, and I feel that people have the right to be anonymous online, no matter who they may be. The only things you should be able to learn about a person are the things they choose to reveal about themselves.&lt;/p&gt;
&lt;p&gt;Privacy is a human right, and companies shouldn't have the ability to strip you of that right for their own monetary gain. The things I post about on my blog are the things I choose to share, and not things that I &lt;em&gt;have&lt;/em&gt; to share. Sure, they may give information about myself away, but I have the ability to remove posts, and more simply, to not write about things that could lead to that.&lt;/p&gt;
&lt;p&gt;&lt;a href="mailto:puppy@puppynet.work?subject=Post:%20The balance between privacy and oversharing"&gt;Reply by email&lt;/a&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;small&gt;This post was last updated 1 day, 4 hours ago.&lt;/small&gt;
</description>
      <author>hidden (networkpuppy)</author>
      <guid isPermaLink="false">https://puppynet.work/the-balance-between-privacy-and-oversharing/</guid>
      <pubDate>Wed, 29 Apr 2026 21:55:00 +0000</pubDate>
    </item>
    <item>
      <title>You Won't Be Left Behind if You Don't Use AI</title>
      <link>https://heychat.bearblog.dev/you-wont-be-left-behind-if-you-dont-use-ai/</link>
      <description>&lt;p&gt;Every time I see "people who don't use AI will be left behind," I roll my eyes. When &lt;a href='http://migrainebrain.bearblog.dev/people-who-dont-use-ai-will-be-left-behind/' target='_blank'&gt;migraine brain's post&lt;/a&gt; came up on the trending Bearblog posts I gave out a little "huzzah!" because I love seeing the contrarian views.&lt;/p&gt;
&lt;p&gt;You know what I also heard that about constantly 10 years ago? &lt;a href='https://tcaflisch.medium.com/the-evolution-of-smart-home-technology-c3b3f918fc8c' target='_blank'&gt;Smart homes&lt;/a&gt;. I'm not a person who has monitored economics over my decades of life, but I've been interested in technology and how people use it since I was a kid. These passive observations have become patterns over time, and it turns out lots of people have done research and written papers on what I've seen. I'll take some cues from them to help you see what I'm seeing.&lt;/p&gt;
&lt;p&gt;We're currently in the "Gold Rush" phase of AI, a 5- to 10-year window where companies are blindly seeking "AI skills" without a clear definition of what that means. Companies are blindly adopting AI products without considering the user workflow or whether it makes sense. Futurists are making wild statements about the future of AI based on how they'd ilke it to progress. If you're not using an LLM to draft your emails or build your slide decks, the rhetoric suggests you're making yourself obsolete.&lt;/p&gt;
&lt;p&gt;If you just look back into recent history you'll see several different possible endings depending on how people choose to use the technology: the most transformative technologies become invisible, moving from revolution to lifestyle choices. The reactions we're seeing are in perfect alignment with the &lt;a href='https://en.wikipedia.org/wiki/Technology_adoption_life_cycle' target='_blank'&gt;Technology Adoption Life Cycle&lt;/a&gt;. We are seeing the same patterns we saw with the dawn of the personal computer, word processing, and the internet.&lt;/p&gt;
&lt;p&gt;In the 1990s there was a persistent fear that if you didn't learn to "code" or couldn't talk about all aspects of a LAN, you would become unemployable. For several years, webmasters were the highest paid people in the office because they possessed specialized skills in this new world. As the technology matured, it leveled out. Today being "good at the internet" isn't a job skill, it's a baseline utility, and you don't need to know anything about how it technically works to use it. There are millions of successful professionals who don't even use the internet in their jobs. The technology didn't replace or dictate the future of the workforce like initially predicted.&lt;/p&gt;
&lt;p&gt;The "Smart Home" is the best metaphor I can think of when it comes to AI. A decade ago, tech enthusiasts predicted that every home would be a fully automated hub of &lt;a href='https://en.wikipedia.org/wiki/Internet_of_things' target='_blank'&gt;Internet of Things (IoT)&lt;/a&gt; devices, and those who didn't adapt would live in "dumb," inefficient homes. Today, our dumb houses are functioning perfectly well. In fact, I think today there are so many ads all over smart home devices that people are grateful they never adopted the "smart home" approach. I don't think AI will be any different. It enhances if you choose to use it, but you're not at a loss if you don't. Smart home adoption is a niche now. After the initial decade of frantic adoption and panic, AI will likely settle into the background as a tool for those who want it, while leaving the core of human work (judgment, empathy, physical presence) largely untouched.&lt;/p&gt;
&lt;p&gt;We are also seeing this "evening out" with social media and smart watches. There was a period where "digital presence" was advertised as "mandatory" for any professional brand, yet we are seeing a trend of executives and creatives going out of their way to avoid the &lt;a href='https://aandrewdunn.medium.com/how-to-turn-off-and-drop-out-of-the-attention-economy-2e68f834d842' target='_blank'&gt;attention economy&lt;/a&gt;. While smart watches offer health insights, they haven't made analog watches obsolete. These technologies transitioned from being "the only path forward" to being "another option if that's what you prefer."&lt;/p&gt;
&lt;p&gt;We'll likely continue to see AI-first hiring for a decade, which will be frustrating for people who prefer not to use it. However, &lt;a href='https://www.gartner.com/en/research/methodologies/gartner-hype-cycle' target='_blank'&gt;the hype will reach its peak and plateau&lt;/a&gt;, and &lt;a href='https://www.pewresearch.org/internet/2014/08/06/future-of-jobs/' target='_blank'&gt;"AI specialist" roles will likely be folded back into general roles&lt;/a&gt;. The "human in the loop" will remain the most valuable asset because while AI can simulate "what," it struggles with "why."&lt;/p&gt;
&lt;p&gt;You won't be left behind by a machine. You'll be living in a world where the machine is just another appliance to put toward some goal or end. Some people will swear by it, but others will find it completely unnecessary and live/work/play fine without it.&lt;/p&gt;
</description>
      <author>hidden (heychat)</author>
      <guid isPermaLink="false">https://heychat.bearblog.dev/you-wont-be-left-behind-if-you-dont-use-ai/</guid>
      <pubDate>Thu, 30 Apr 2026 16:46:00 +0000</pubDate>
    </item>
    <item>
      <title>Can You Reverse Brain Rot?</title>
      <link>https://untangled.bearblog.dev/brain-rot/</link>
      <description>&lt;p&gt;I’ve been following Cal Newport for some time (known for his book &lt;a href='https://calnewport.com/deep-work-rules-for-focused-success-in-a-distracted-world/' target='_blank'&gt;Deep Work&lt;/a&gt; and &lt;a href='https://www.goodreads.com/book/show/40672036-digital-minimalism/' target='_blank'&gt;Digital Minimalism&lt;/a&gt;). I’ve come to look forward to his weekly podcasts on attention, focus, and life without social media.&lt;/p&gt;
&lt;p&gt;Today, he dropped a podcast titled “How Do I Build Cognitive Fitness?” (Alternatively, his &lt;a href='https://m.youtube.com/watch?v=U9a2_KqzF7Y' target='_blank'&gt;YouTube video&lt;/a&gt; with the same content was titled &lt;em&gt;“How Do I Reverse Brain Rot?”&lt;/em&gt;, so I’m curious which one got more clicks).&lt;/p&gt;
&lt;p&gt;In it, he outlines five ways that we can improve our cognitive fitness and challenges us to be more intentional with our time. If you want to &lt;a href='https://m.youtube.com/watch?v=U9a2_KqzF7Y' target='_blank'&gt;give it a listen&lt;/a&gt; he covers it in the first half of this episode.&lt;/p&gt;
&lt;p&gt;Here are the five things he suggested, and my thoughts on each one.&lt;/p&gt;
&lt;hr /&gt;
&lt;h3 id=read-every-day&gt;Read every day.&lt;/h3&gt;&lt;p&gt;He suggests that the more time you spend reading, the more you’re rewiring your brain. It gives you practice in &lt;mark&gt;“aiming your mind’s eye at a desired internal target,”&lt;/mark&gt; such as a thought or idea. He recommends starting with books you’re excited to read, not books you feel you “should” read. Fun books, trashy books, romance books, they all count. Start with 15-20 pages a day. Read at lunch and before bed. After you’re doing that regularly, increase to 30-50 pages a day. As that gets easier, make 1 out of 3 books “hard” or more challenging.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I feel good about reading every day, though I can definitely improve on the number of pages. I’m likely still in the first category, reading 10-20 pages a day. Not quite to 50 pages a day. Though I do think I’m reading more challenging stuff sometimes. I’m currently reading &lt;em&gt;Dune Messiah&lt;/em&gt; and I’m definitely slowing down to digest the passages.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3 id=dont-avoid-writing&gt;Don’t avoid writing.&lt;/h3&gt;&lt;p&gt;Cal states that many people are writing less than ever before due to generative AI, and that &lt;em&gt;“to improve your cognitive fitness you should seek out as many opportunities to write as possible.”&lt;/em&gt; He goes on to say that &lt;mark&gt;“we feel naturally resistant to it because of how many moving pieces there are in our brains.”&lt;/mark&gt; Writing is hard and we feel strain when we write, but it provides us with more &lt;em&gt;“cognitive strength”&lt;/em&gt;. He recommends studying technique while you read, writing in a journal/newsletter/blog (&lt;em&gt;ahem&lt;/em&gt;), and acclimating yourself to getting over the first 10 minutes of writing as they are the hardest to work through.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Why do you think I’m here? Ha! I’m trying to get better at constructing my thoughts into sentences and paragraphs. And yeah, it’s hard. But I figure the more I do it, the easier it will start to feel. I’m also trying to write by hand more, but I need to find better opportunities to write when I’m not getting interrupted.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3 id=go-on-thinking-walks&gt;Go on thinking walks.&lt;/h3&gt;&lt;p&gt;Cal suggests that we take walks several times a week without our phone (and if we do bring our phones we should make them very inaccessible, like at the bottom of our bag). &lt;mark&gt;“Practice turning your attention inwards to make sense of some information.”&lt;/mark&gt; Brainstorming or day dreaming counts here. Reflection is “where you develop your sense of self” and our best ideas come from it. He recommends journaling your insights after your walk, that it will help you clarify internal thinking.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I might have to take this one on. I’m not good at getting out of the house (I work remotely), and I need to be better about it. The weather is warming up though, which will make it easier. Plus it’ll get me away from my desk more often. Relatedly, our brains come up with some of our best ideas while we’re in the shower, and I think the two are connected.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3 id=plug-in-your-phone&gt;Plug in your phone.&lt;/h3&gt;&lt;p&gt;Cal recommends keeping your phone plugged in and not with you when you’re at home. &lt;mark&gt;“Spend hours in your house each day without your phone as your constant companion.”&lt;/mark&gt; This will give you lots of practice doing things without that constant short term motivation to pick up the phone. Put your ringer on and let people know to call you. Make the phone “less desirable by taking off any apps that make money from your engagement” - social media apps, for example.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Oooh this one is good. I can do this right now. I have a charger in the other room and can put my phone there. This would be a really good one to practice. I still need an alternative for having my phone next to my bed - I still use it as my alarm because last summer our phones woke us at midnight for a tornado warning that sent us grabbing the kids and flying to the basement. Maybe I can plug it in further away from the bed and still hear weather alerts. I’ll need to explore this further.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3 id=learn-a-hard-skill&gt;Learn a hard skill.&lt;/h3&gt;&lt;p&gt;Cal’s last idea is to master a skill that requires you to focus and get better, but also gets you a clear reward. Take up tennis, playing the guitar, learning to knit, etc. He goes on to say that when you learn a hard skill, it &lt;mark&gt;“builds up a sense of discipline and helps train your long term motivation system that when we focus on something hard, over time we get meaningful rewards.”&lt;/mark&gt; When you practice focusing, it becomes easier to sustain concentration. BUT, you need to do this on a regular schedule, not whenever you feel like it.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;This is a great one. I’ll have to think about what I’d want to learn! The hard part for me will be sticking with it. I have a terrible habit of picking up new projects every couple of weeks and I need to get on top of that. I need to find something that will sustain me for at least a season or so.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;hr /&gt;
&lt;p&gt;Overall, some really good tips from Cal Newport. I really do think these could help someone step back and assess their cognitive fitness. I know I’ll be implementing some of these ideas.&lt;/p&gt;
&lt;p&gt;Which of these 5 do you see yourself implementing? Which is the easiest? The most difficult? I’d love to hear your thoughts on this.&lt;/p&gt;
&lt;div class="reply-email"&gt;
  &lt;a href="mailto:amandauntangled@proton.me?subject=Re:%20Can You Reverse Brain Rot?"&gt;Reply via email&lt;/a&gt;
&lt;/div&gt;
</description>
      <author>hidden (untangled)</author>
      <guid isPermaLink="false">https://untangled.bearblog.dev/brain-rot/</guid>
      <pubDate>Tue, 28 Apr 2026 00:15:00 +0000</pubDate>
    </item>
    <item>
      <title>first posts are how selves get made</title>
      <link>https://risse.bearblog.dev/first-posts-are-how-selves-get-made/</link>
      <description>&lt;p&gt;i made a choice to introduce myself as risse&lt;/p&gt;
&lt;p&gt;it's a small thing but your name is the first step of an introduction &lt;br /&gt;
and i like how you can pronounce it at least five different ways, &lt;br /&gt;
plus it's not on my birth certificate, so i don't have to correct people when they say it wrong&lt;/p&gt;
&lt;p&gt;&lt;br /&gt;
i put a lot of weight into introductions &lt;br /&gt;
i have this fear that on first pass, people read traits into me that i gotta uphold from then on &lt;br /&gt;
so i put a lot of thought into who i'll come off as &lt;br /&gt;
and i rewrite first posts over and over &lt;br /&gt;
because whether i capitalize my letters says something about the self i can offer them,
and once i've committed i'll be stuck for a while&lt;/p&gt;
&lt;p&gt;i don't know though. it's not high school anymore, i don't need to daydream of a body to escape into &lt;br /&gt;
i should have fun with the identities i act out even if there's nothing truer underneath&lt;/p&gt;
&lt;p&gt;&lt;br /&gt;
so hello !! &amp;nbsp;  i'm risse :) &amp;nbsp; &amp;nbsp; it's nice to meet you&lt;/p&gt;
</description>
      <author>hidden (risse)</author>
      <guid isPermaLink="false">https://risse.bearblog.dev/first-posts-are-how-selves-get-made/</guid>
      <pubDate>Tue, 28 Apr 2026 00:09:00 +0000</pubDate>
    </item>
    <item>
      <title>Spread Hope</title>
      <link>https://blog.absurdpirate.com/spread-hope/</link>
      <description>&lt;p&gt;It's getting dark out there. Fascism is wanting to rear its ugly head, people are getting propagandized to hate each other, we're more stressed and neurotic than ever, shit is getting more expensive, and people are struggling to make ends meet. Instead of despair, I ask you to be a beacon of light in the dark. I want you to help spread hope.&lt;/p&gt;
&lt;p&gt;Hope is found in the smallest of actions. It's found when we help others without being asked. It's when we open an ear and lend a hand. It's feeding and giving to people who have far less than you. It's being a friend. It's volunteering when and where you can. It's helping the sick, the elderly, the young, the frightened.&lt;/p&gt;
&lt;p&gt;I do not say this like it is easy. It's far easier to ignore everything and turn a blind eye to the suffering, but you cannot hide from it forever. So what will you do? Succumb, or embrace your fellow sufferers?&lt;/p&gt;
&lt;p&gt;&lt;a class="previous-post" href="/when-does-collecting-become-overconsumption" title="When Does Collecting Become Overconsumption?"&gt;Previous&lt;/a&gt; | &lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Reply via email: &lt;a href='mailto:me@absurdpirate.com'&gt;me@absurdpirate.com&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;hr /&gt;
&lt;h3 id=as-of-writing-this&gt;as of writing this...&lt;/h3&gt;&lt;p&gt;I have applied to become a court-appointed special advocate for children. I've been putting it off due to time constraints, but I'm in a spot now where I can dedicate the time. We're going to a funeral for my wife's grandfather this weekend; we weren't particularly close for a plethora of reasons, so we're mostly showing up to support my MIL.&lt;/p&gt;
</description>
      <author>hidden (absurdpirate)</author>
      <guid isPermaLink="false">https://blog.absurdpirate.com/spread-hope/</guid>
      <pubDate>Thu, 30 Apr 2026 18:42:55 +0000</pubDate>
    </item>
    <item>
      <title>The New Kid on the Block</title>
      <link>https://notes.jeddacp.com/the-new-kid-on-the-block/</link>
      <description>&lt;p&gt;There’s something unsettling about being the &lt;em&gt;new kid on the block&lt;/em&gt;, even as an adult.&lt;/p&gt;
&lt;p&gt;It’s not super obvious, at least not like back then when you were a literal child. No one is pointing or whispering. There’s no cafeteria to navigate or classroom to choose a seat in. But the feeling from those scenarios still shows up, just in a different way. It’s the hesitation before saying something out loud, the extra time spent double (and triple) checking something that would normally feel simple, and the awareness that everyone else &lt;em&gt;seems&lt;/em&gt; to know what they’re doing.&lt;/p&gt;
&lt;p&gt;I started somewhere &lt;em&gt;new&lt;/em&gt; three weeks ago.&lt;/p&gt;
&lt;p&gt;In the grand scheme of things, three weeks isn’t a long time. I get that. If anyone else said that to me, I’d tell them to give themselves more grace, to take their time, to &lt;em&gt;settle in&lt;/em&gt;. As always, in my head, it sounds different. It’s more like: &lt;strong&gt;it’s already week three. You should have a better handle on things by now.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I really don’t though. Not &lt;em&gt;fully&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;There are still gaps here and there. I have moments where I’m trying to piece things together in real time, trying to understand not just what I’m supposed to do, but how everything fits, who to go to, and what hasn’t been said out loud but is somehow expected to be known.&lt;/p&gt;
&lt;p&gt;Sometimes things are handed off without much context, or reference something that hasn’t clicked yet. Not in a &lt;em&gt;bad&lt;/em&gt; way, just in a way that assumes I’ve already connected &lt;em&gt;all of the dots&lt;/em&gt; that I didn’t even know existed. So I nod, and figure it all out somehow.&lt;/p&gt;
&lt;p&gt;I get it. I’m not supposed to know everything yet. That would be unreasonable (maybe?). But there’s still that perfectionist part of me that expects it anyway. Or at least expects me to &lt;em&gt;anticipate&lt;/em&gt; it, to figure out how to read between the lines and keep up with things I haven’t been introduced to yet. Which, when I say out loud, doesn’t even make much sense. I don’t know how to read people’s minds. I don’t even think I’ve met everyone I’m supposed to meet yet. I still feel behind though.&lt;/p&gt;
&lt;p&gt;I know that a lot of this comes from the version of myself I’m used to being. She’s the one who is steady, capable and already a few steps ahead. There’s a rhythm to that version of me, and right now, I’m all over the place. At this moment, I feel slightly disjointed, a little uncomfortable, full of questions that don’t have immediate answers I can get to just yet.&lt;/p&gt;
&lt;p&gt;Just like with my &lt;a href='/on-making-and-breaking-routines/'&gt;making and breaking routines&lt;/a&gt;, I’m &lt;em&gt;attempting&lt;/em&gt; to sit with it without rushing to just &lt;strong&gt;fix it.&lt;/strong&gt; I’m trying to let myself be new without treating it like a problem. I understand that three weeks is still the beginning, even if it doesn’t feel like it.&lt;/p&gt;
&lt;p&gt;I'm trying to be the new kid on the block who isn't just catching up as quickly as possible (even though my stress level would probably be lower if I did). I'm hoping to learn to find my place while still figuring things out at the same time.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;...and if you happen to read this, and you’re in that world of mine right now, no, you didn’t just read this.&lt;/em&gt;&lt;/p&gt;
&lt;h3 id=i-am-as-confident-as-can-be&gt;I am as confident as can be.&lt;/h3&gt;
&lt;details open&gt;
&lt;summary&gt;Comments&lt;/summary&gt;
&lt;p&gt;If you'd like to comment, please send me an email, or sign my &lt;a href='/guestbook/'&gt;Guestbook&lt;/a&gt;.&lt;/p&gt;
&lt;/details&gt;
&lt;h5 id=a-hrefmailtoheyjeddacpmereply-by-emaila&gt;&lt;a href='mailto:hey@jeddacp.me'&gt;Reply by email&lt;/a&gt;&lt;/h5&gt;</description>
      <author>hidden (jedda)</author>
      <guid isPermaLink="false">https://notes.jeddacp.com/the-new-kid-on-the-block/</guid>
      <pubDate>Tue, 28 Apr 2026 16:43:14 +0000</pubDate>
    </item>
    <item>
      <title>Forget about the numbers</title>
      <link>https://robertbirming.com/forget-about-numbers/</link>
      <description>&lt;p&gt;I don't check my Bear stats very often. When I do, it's always the same: ups and downs, ups and downs...&lt;/p&gt;
&lt;p&gt;The spikes occur when a post is trending. When it's gone, the numbers go back down into the peaceful valley.&lt;/p&gt;
&lt;p&gt;I say &lt;em&gt;peaceful&lt;/em&gt; because that's how I feel. I like that it goes up and down. It shows that it's not so much about my blog as a whole, but rather that some posts simply happen to attract more people at certain times.&lt;/p&gt;
&lt;p&gt;Some might say that's what statistics are for, learning what works, coming up with a winning formula. That might work for a product, but it's a recipe for disaster when it comes to a personal blog.&lt;/p&gt;
&lt;p&gt;Blogging according to a winning formula is a great way to lose. You'll lose interest, and so will your readers.&lt;/p&gt;
&lt;p&gt;Forget about the numbers. Remember to be yourself.&lt;/p&gt;
</description>
      <author>hidden (robert)</author>
      <guid isPermaLink="false">https://robertbirming.com/forget-about-numbers/</guid>
      <pubDate>Mon, 27 Apr 2026 17:28:00 +0000</pubDate>
    </item>
  </channel>
</rss>
