When I wrote the first iteration of the Lixie tool about a year ago (early 2024), my idea was to identify which logs were boring (most of them), interesting (very few of them) and unknown (not yet classified). At the time I chose not to use ‘AI’ (LLMs) for it, and I am still not that convinced they are the best way to approach that particular problem. Ultimately it boils down to this: human judgment of what is useful is much more realistic (at least in my context) than what the LLMs ‘know’ (absent fine-tuning and/or extensive example sets, which I by definition do not have for my personal logs). After choosing not to use LLMs for it, it was just a matching exercise - structured log messages against an ordered set of rules. ...
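The ordered-rule-matching idea can be sketched roughly as follows. This is a minimal illustration, not Lixie's actual implementation; the field names (`message`, `app`) and the example rules are hypothetical:

```python
import re
from dataclasses import dataclass, field

# Classification buckets; UNKNOWN means no rule matched yet.
BORING, INTERESTING, UNKNOWN = "boring", "interesting", "unknown"

@dataclass
class Rule:
    pattern: str   # regex matched against the log message text
    verdict: str   # BORING or INTERESTING
    fields: dict = field(default_factory=dict)  # optional exact-match structured fields

    def matches(self, entry: dict) -> bool:
        if not re.search(self.pattern, entry.get("message", "")):
            return False
        return all(entry.get(k) == v for k, v in self.fields.items())

def classify(entry: dict, rules: list[Rule]) -> str:
    # First matching rule wins - the rule list is ordered by priority.
    for rule in rules:
        if rule.matches(entry):
            return rule.verdict
    return UNKNOWN

# Hypothetical rules: resets from sshd are interesting, others boring.
rules = [
    Rule(r"connection reset", INTERESTING, {"app": "sshd"}),
    Rule(r"connection reset", BORING),
]
```

Anything no rule covers falls through to "unknown", which is exactly the bucket a human then triages into new rules.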
How I write notes.. and why?
Over time, my ways of writing notes have evolved. I think writing things down helps me both to retain an extended working memory of things I have done over time, and to process them (sometimes much, much later). I write this blog mainly just to organize my own thoughts, as opposed to actually writing for an audience (by design, I keep no statistics of readers, and I do not particularly want to interact with the hypothetical reader or two that might stumble here, sorry - I believe most of the visitors are AI scraper bots anyway). ...
From Hue (back) to Home Assistant
Background I think I wrote about this in some earlier blog post too, but I have used various home automation solutions for a while now. I started out with very, very early Home Assistant builds; I am not quite sure when, but I contributed a little to it in 2014 at least (based on git log). Later in 2014 I started developing my own solution with a somewhat different, decentralized model ( GitHub - fingon/kodin-henki: ‘Spirit of home’ - my home automation project written in Python 2/3. ), which I used for about 5 years before switching to the much less featureful, but also much lower-maintenance, Philips Hue system. ...
3D printing once more
The last 3d print I did was in August 2023 (spice rack boxes). This time around, I needed something to hang Philips Hue motion sensors from - ideally without making holes in the walls. The exercise took most of the weekend. Or at least, the 3d printer was active for a lot of it; I did not spend that much time designing the thing, obviously. My 3d printing process I used to use the Fusion 360 design tool, but a while ago I realized that describing geometry suits me better than drawing it out using a mouse. So I switched to OpenSCAD. Its language is .. an acquired taste, though, so I use solidpython2 to generate the .scad files from a Python script which describes the geometry I want. ...
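The describe-geometry-in-code idea looks roughly like this. Note this is a stdlib-only sketch of the concept, not solidpython2's actual API, and the bracket dimensions are made up for illustration:

```python
# Tiny helpers that emit OpenSCAD source as text; solidpython2 does the
# same job far more thoroughly (composable objects, transforms, etc.).

def cube(x, y, z):
    return f"cube([{x}, {y}, {z}]);"

def cylinder(r, h):
    return f"cylinder(r={r}, h={h});"

def difference(*parts):
    # OpenSCAD's difference(): first child minus the rest.
    body = "\n".join(f"  {p}" for p in parts)
    return f"difference() {{\n{body}\n}}"

# A hypothetical sensor-bracket blank: a small plate with a hanging hole.
plate = cube(40, 40, 3)
hole = cylinder(r=4, h=10)
scad_source = difference(plate, hole)
print(scad_source)
```

The appeal is that dimensions become named variables you can tweak and re-render, instead of sketch constraints you drag around with a mouse.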
NVidia L40S - reasonably priced LLM runner in the cloud?
As we are currently doing things in AWS, I wanted to evaluate the AWS EC2 g6e.xlarge (4 EPYC cores, 32 GB RAM, with a 48 GB NVIDIA L40S GPU), as it seems to be the only AWS offering that is even moderately competitive at around $1.80/hour. The other instance types wind up either with lots of (unneeded) compute compared to GPU, or with a ‘large’ number of GPUs, and in general the pricing seems quite depressing compared to their smaller competitors (e.g. https://datacrunch.io/ provides 2 L40S at $1.80/hour, and a single A100 is similarly priced). ...
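The price gap is easy to make concrete with the numbers quoted above (a back-of-the-envelope sketch; the quoted rates are the only inputs):

```python
# Per-GPU-hour cost from the quoted offerings.
aws_per_gpu_hour = 1.8 / 1         # g6e.xlarge: one L40S at ~$1.80/hour
datacrunch_per_gpu_hour = 1.8 / 2  # datacrunch.io: two L40S at ~$1.80/hour

# What an always-on instance would cost per 30-day month.
monthly_aws = aws_per_gpu_hour * 24 * 30
print(f"AWS: ${aws_per_gpu_hour:.2f}/L40S-hour, ~${monthly_aws:.0f}/month if left running")
print(f"datacrunch: ${datacrunch_per_gpu_hour:.2f}/L40S-hour")
```

So AWS is roughly 2x the per-GPU price of the smaller competitor, before even considering the A100 option.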
Finally working modern mesh wireless network at home
TL;DR: Unifi mesh is bad, Orbi is pricey, TP-Link is surprisingly good. Recap (2024 home wifi history) I had a Netgear Orbi (75x series) for 4 years (2020-2024). Last summer, I experimented with Unifi (see earlier posts); to put it bluntly, it sucked for mesh use, and I went back to the Orbis. The Orbi still did not support wifi 6E, which modern MacBook Pros need for more than 1200mbps wifi phy rate (= more than 600mbps data rate). So, I was on the hunt for more hardware.. New challenger is found In early Black Friday deals in mid-November, I spotted a TP-Link Deco BE65 set at a quite reasonable discount. On paper, it seemed quite promising. Why is that? ...
M1 Pro vs M4 Max
New work laptop. So of course I had to benchmark its speed at running local LLMs.. These results use the default 4-bit quantization, with ollama version 0.4.1. Apple MacBook Pro M1 Pro (32GB RAM, 2021 model): gemma2:9b: eval rate: 24.17 tokens/s; gemma2:27b: eval rate: 10.06 tokens/s; llama3.2:3b: eval rate: 52.10 tokens/s; llama3.1:8b: eval rate: 31.69 tokens/s. Apple MacBook Pro M4 Max (36GB RAM, 2024 model): gemma2:9b: eval rate: 46.49 tokens/s; gemma2:27b: eval rate: 20.06 tokens/s; llama3.2:3b: eval rate: 99.66 tokens/s; llama3.1:8b: eval rate: 59.98 tokens/s. Conclusions The 2024 laptop is roughly twice as fast as the 2021 one, and almost exactly the speed of an RTX 3080 (a 3-year-old NVIDIA GPU) but with more VRAM to play with, so quite nice. Still, cloud providers are an order of magnitude faster. ...
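The "roughly twice as fast" conclusion checks out when computed from the measured eval rates above:

```python
# Eval rates (tokens/s) from the benchmark: (M1 Pro 2021, M4 Max 2024).
rates = {
    "gemma2:9b":   (24.17, 46.49),
    "gemma2:27b":  (10.06, 20.06),
    "llama3.2:3b": (52.10, 99.66),
    "llama3.1:8b": (31.69, 59.98),
}

# Per-model speedup of the newer machine over the older one.
speedups = {model: m4 / m1 for model, (m1, m4) in rates.items()}
for model, s in speedups.items():
    print(f"{model}: {s:.2f}x")
```

Every model lands in the 1.89x-2.00x range, i.e. almost exactly a 2x generational jump.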
Pulumi (and pyinfra) at home
As noted in the previous Pulumi post, I had a bit too much to write about when describing my current home infrastructure. Due to that, here’s a stand-alone post about just that - Pulumi (and pyinfra) at home. Current hobby architecture To give a concrete example of how I am using Pulumi in my current hobby infrastructure, this is a simplified version of my hobby IaC architecture. There are a lot of containers, both within and without Kubernetes, that I am omitting from the diagram for clarity: fw pyinfra/Pulumi provisioning configures local infrastructure, and the oraakkeli Pulumi stack (and two pyinfra configurations) handle my VPSes in Oracle Cloud. ...
DSL (in DSL), or Pulumi?
I have used Terraform professionally and in hobby projects every now and then for a couple of years now (most recently OpenTofu). I have tolerated it due to the ecosystem (as mentioned in an earlier blog post), but I have never particularly liked it. Why? The reasons are pretty much the same as why I am not a fan of Helm charts either. DSLs are not expressive enough, nor powerful enough Making something ‘human friendly’ (read: a huge pile of YAML for devops people) is overrated. The cost of doing that is that automatically validating and formatting it becomes tricky, and the expressed things are mostly too inaccurately defined (‘sure, this is a string, but you are supposed to enter an URL here’). The tooling usually does not help much either: while programming languages have widespread support in editors, DSLs most of the time do not. Custom configuration languages are not usually much better - being limited by design is not great, nor is it great for integrating with ‘other’ things which use real programming languages. ...
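The ‘this is a string, but you are supposed to enter an URL here’ complaint is the kind of thing a real language fixes for free. A hypothetical sketch (the resource shape and names are made up, not Pulumi's API): repetition becomes a loop, and validation becomes ordinary code that fails before anything is deployed:

```python
from urllib.parse import urlparse

def make_service(name: str, url: str) -> dict:
    # In YAML this field would just be a string; here we can actually
    # enforce that it is a URL, at definition time.
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        raise ValueError(f"{name}: not a valid URL: {url!r}")
    return {"name": name, "endpoint": url, "replicas": 2}

# Three near-identical resources from one definition - no copy-paste,
# no templating language inside a markup language.
services = [
    make_service(name, f"https://{name}.example.com")
    for name in ("api", "web", "worker")
]
```

A typo like `htp://...` fails immediately with a stack trace, rather than surviving review as a syntactically valid string.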
iOS app backend language evaluation - Go or Rust?
I have been looking at how to create an iOS app recently, and more particularly, its backend. SwiftUI as a front-end framework these days is quite lovely, but I am not convinced that the Swift ecosystem is really good enough for backend work - either on the device, or especially outside it (although Apple is making baby steps with Embedded Swift). The UI, on the other hand, seems best done with Swift (and notably SwiftUI now). It seems considerably better than the Interface Builder based Objective-C that I used last time around. ...