Handle Chromium & Firefox sessions with org-mode

I was a big fan of Session Manager, a small addon for Chrome and Chromium that would save all open tabs, assign a name to the session and, when needed, restore it.

Very useful, especially if you are like me, switching between multiple "mind sessions" during the day: research, development or maybe news reading. Or if you'd simply like to remember the workflow (and tabs) you had a few days ago.

After I decided to ditch all extensions from Chromium except uBlock Origin, it was time to look for an alternative. My main goals were for it to be browser agnostic and for session links to be stored in a text file, so I could enjoy all the goodies of plain text. What would be better for that than good old org-mode ;)

A long time ago I found this trick: Get the currently open tabs in Google Chrome via the command line. With some elisp sugar and coffee, here is the code:

(require 'cl-lib)

(defun save-chromium-session ()
  "Read the current Chromium session and generate an org-mode heading with items."
  (interactive)
  (save-excursion
    (let* ((cmd "strings ~/'.config/chromium/Default/Current Session' | 'grep' -E '^https?://' | sort | uniq")
           (ret (shell-command-to-string cmd)))
      (insert
       (concat
        "* "
        (format-time-string "[%Y-%m-%d %H:%M:%S]")
        "\n"
        (mapconcat 'identity
                   (cl-reduce (lambda (lst x)
                                (if (and x (not (string= "" x)))
                                    (cons (concat "  - " x) lst)
                                  lst))
                              (split-string ret "\n")
                              :initial-value (list))
                   "\n"))))))

(defun restore-chromium-session ()
  "Restore session by opening each link in the list with `browse-url'.
Make sure to put the cursor on the date heading that contains the list of urls."
  (interactive)
  (save-excursion
    (beginning-of-line)
    (when (looking-at "^\\*")
      (forward-line 1)
      (while (looking-at "^[ ]+-[ ]+\\(http.?+\\)$")
        (let* ((ln (thing-at-point 'line t))
               (ln (replace-regexp-in-string "^[ ]+-[ ]+" "" ln))
               (ln (replace-regexp-in-string "\n" "" ln)))
          (browse-url ln))
        (forward-line 1)))))

So, how does it work?

Evaluate the above code, open a new org-mode file and call M-x save-chromium-session. It will create something like this:

* [2019-12-04 12:14:02]
  - https://www.reddit.com/r/emacs/comments/...
  - https://www.reddit.com/r/Clojure
  - https://news.ycombinator.com

or whatever urls are open in the running Chromium instance. To restore them, put the cursor on the desired date heading and run M-x restore-chromium-session. All tabs should be back.

Here is how I use it, with randomly generated data for the purpose of this text:

* [2019-12-01 23:15:00]...
* [2019-12-02 18:10:20]...
* [2019-12-03 19:00:12]
  - https://www.reddit.com/r/emacs/comments/...
  - https://www.reddit.com/r/Clojure
  - https://news.ycombinator.com

* [2019-12-04 12:14:02]
  - https://www.reddit.com/r/emacs/comments/...
  - https://www.reddit.com/r/Clojure
  - https://news.ycombinator.com

Note that this hack for reading the Chromium session isn't perfect: strings will pull out whatever looks like a string or url from the binary database, and sometimes that will yield small artifacts in the urls. But you can easily edit those out and keep the session file lean and clean.

To actually open tabs, the elisp code uses browse-url, and it can be further customized to run Chromium, Firefox or any other browser through the browse-url-browser-function variable. Make sure to read the documentation for this variable.
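For example, to send restored tabs to Firefox regardless of the system default browser, something like this could go into your init file (a minimal sketch using the standard browse-url variables):

```elisp
;; Open restored links in Firefox instead of the system default browser.
(setq browse-url-browser-function 'browse-url-firefox)

;; Or use any program on your PATH, e.g. Chromium:
;; (setq browse-url-browser-function 'browse-url-generic
;;       browse-url-generic-program "chromium")
```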

Don't forget to put the session file under git, mercurial or svn and enjoy the fact that you will never lose your session history again :)

If you are using Firefox (recent versions) and would like to pull session urls, here is how to do it.

First, download and compile lz4json, a small tool that decompresses Mozilla's lz4json format, in which Firefox stores session data. Session data (at the time of writing this post) is stored in $HOME/.mozilla/firefox/<unique-name>/sessionstore-backups/recovery.jsonlz4.

If Firefox is not running, recovery.jsonlz4 will not be present; use previous.jsonlz4 instead.

To extract urls, try this in terminal:

$ lz4jsoncat recovery.jsonlz4 | grep -oP '"(http.+?)"' | sed 's/"//g' | sort | uniq

and update save-chromium-session with:

(defun save-chromium-session ()
  "Reads chromium current session and converts it to org-mode chunk."
  (interactive)
  (save-excursion
    (let* ((path "~/.mozilla/firefox/<unique-name>/sessionstore-backups/recovery.jsonlz4")
           (cmd (concat "lz4jsoncat " path " | grep -oP '\"(http.+?)\"' | sed 's/\"//g' | sort | uniq"))
           (ret (shell-command-to-string cmd)))
      ;; ... the rest of the body stays the same as in the Chromium version above
      )))

Updating documentation strings, the function name and any further refactoring is left as an exercise.


Org Real

The real:// url scheme is based on the http:// scheme, with some differences.

There is no "host" component; all components in a real URL are treated identically and are called containers. Each container can have a query string, whereas the http scheme can only have one query string at the end of a URL. And finally, spaces are allowed in component names.

real://bathroom cabinet/third shelf?rel=in/razors?rel=above/toothbrush?rel=to the left of

Real links are read from the most general to the most specific, so in this example the bathroom cabinet is the topmost container and has a child, third shelf, with a relationship of "in". The relationship query parameter refers to the container immediately to the left, so this tells org-real that the third shelf is in the bathroom cabinet.
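To make that reading order concrete, here is a small illustrative parser (my own hypothetical helper, not org-real's actual code) that splits a real URL into (container, relationship) pairs, outermost first:

```python
def parse_real_url(url):
    """Split a real:// URL into (container, relationship) pairs, outermost first."""
    assert url.startswith("real://")
    containers = []
    for part in url[len("real://"):].split("/"):
        name, _, query = part.partition("?")            # per-container query string
        rel = query[len("rel="):] if query.startswith("rel=") else None
        containers.append((name, rel))
    return containers

url = ("real://bathroom cabinet/third shelf?rel=in"
       "/razors?rel=above/toothbrush?rel=to the left of")
print(parse_real_url(url))
# [('bathroom cabinet', None), ('third shelf', 'in'),
#  ('razors', 'above'), ('toothbrush', 'to the left of')]
```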


My road to dark mode

There’s a drunk reddit post I read months ago, talking about programming life and many other things. There’s one thing from it I want to talk about.

Dark mode is great until you’re forced to use light mode (webpage or an unsupported app). That’s why I use light mode.

This was so true for me. I used to use dark mode everywhere, until I found there were so many webpages so bright that they literally hurt my eyes at my dark-mode monitor brightness. So I turned back to light mode.

My browser has had Dark Reader installed for many years, but I didn’t use it much until recently. I revisited it and found it supports a shortcut to toggle the current webpage between dark and light mode. I rebound it to a convenient key and now everything just works in the browser.

I rebind shortcuts all the time. Why the hell hadn’t I done this earlier!

What’s more, there’s a PR for Dark Reader that makes its URL matching system more powerful, so it can handle sites with partial dark mode support, like the GitHub main site and GitHub Marketplace.

You can fork the repo, merge the PR yourself and build a local version right now if you can’t wait.


The Rise of Long-Form Generative Art — Tyler Hobbs

The New World

Today, platforms like Art Blocks (and in the future, I’m sure many others) allow for something different. The artist creates a generative script (e.g. Fidenza) that is written to the Ethereum blockchain, making it permanent, immutable, and verifiable. Next, the artist specifies how many iterations will be available to be minted by the script. A typical choice is in the 500 to 1000 range. When a collector mints an iteration (i.e. they make a purchase), the script is run to generate a new output, and that output is wrapped in an NFT and transferred directly to the collector. Nobody, including the collector, the platform, or the artist, knows precisely what will be generated when the script is run, so the full range of outputs is a surprise to everyone.

Note the two key differences from earlier forms of generative art. First, the script output goes directly into the hands of the collector, with no opportunity for intervention or curation by the artist. Second, the generative algorithms are expected to create roughly 100x more iterations than before. Both of these have massive implications for the artist. They should also have massive implications for how collectors and critics evaluate the quality of a generative art algorithm.

Analyzing Quality

As with any art form, there are a million unpredictable ways to make something good. Without speaking in absolutes, I'll try to describe what I think are useful characteristics for evaluating whether a long-form generative art program is successful or not, and how this differs from previous (short) forms of generative art.

Fundamentally, with long-form, collectors and viewers become much more familiar with the "output space" of the program. In other words, they have a clear idea of exactly what the program is capable of generating, and how likely it is to generate one output versus another. This was not the case with short-form works, where the output space was either very narrow (sometimes singular) or cherry-picked for the best highlights. By withholding most of the program output, the artist could present a particular, limited view of the algorithm. With long-form works, the artist has nowhere to hide, and collectors will get to know the scope of the algorithm almost as well as the artist.

What are the implications of this? It makes the "average" output from the program crucial. In fact, even the worst outputs are arguably important, because they're just as visible. Before, this bad output could be ignored and discarded. The artist only cared about the top 5% of output, because that's what would make it into the final curated set to be presented to the public. The artist might have been happy to design an algorithm that produced 95% garbage and 5% gems.


The Mystery of AS8003 | Kentik

On January 20, 2021, a great mystery appeared in the internet’s global routing table. An entity that hadn’t been heard from in over a decade began announcing large swaths of formerly unused IPv4 address space belonging to the U.S. Department of Defense. Registered as GRS-DoD, AS8003 began announcing several large DoD IPv4 ranges.

According to data available from University of Oregon’s Routeviews project, one of the very first BGP messages from AS8003 to the internet was:

TIME: 01/20/21 16:57:35
FROM: AS1299
TO: AS6447
ASPATH: 1299 6939 6939 8003

The message above has a timestamp of 16:57 UTC (11:57am ET) on January 20, 2021, moments after the swearing in of Joe Biden as the President of the United States and minutes before the statutory end of the administration of Donald Trump at noon Eastern time.

The questions that started to surface included: Who is AS8003? Why are they announcing huge amounts of IPv4 space belonging to the U.S. Department of Defense? And perhaps most interestingly, why did it come alive within the final three minutes of the Trump administration?

By late January, AS8003 was announcing about 56 million IPv4 addresses, making it the sixth largest AS in the IPv4 global routing table by originated address space. By mid-April, AS8003 dramatically increased the amount of formerly unused DoD address space that it announced to 175 million unique addresses.

Following the increase, AS8003 became, far and away, the largest AS in the history of the internet as measured by originated IPv4 space. By comparison, AS8003 now announces 61 million more IP addresses than the now-second biggest AS in the world, China Telecom, and over 100 million more addresses than Comcast, the largest residential internet provider in the U.S.

In fact, as of April 20, 2021, AS8003 is announcing so much IPv4 space that 5.7% of the entire IPv4 global routing table is presently originated by AS8003. In other words, more than one out of every 20 IPv4 addresses is presently originated by an entity that didn’t even appear in the routing table at the beginning of the year.
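A quick back-of-the-envelope check of those figures (my own arithmetic, not from the article) shows they hang together:

```python
as8003 = 175_000_000   # addresses announced by AS8003 (mid-April 2021)
share = 0.057          # stated share of the IPv4 global routing table

total_routed = as8003 / share          # implied size of the routed IPv4 space
print(round(total_routed / 1e9, 2))    # roughly 3 billion routed addresses

china_telecom = as8003 - 61_000_000    # stated gap to the second-largest AS
print(china_telecom // 1_000_000)      # China Telecom, in millions of addresses
```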

A valuable asset

Decades ago, the U.S. Department of Defense was allocated numerous massive ranges of IPv4 address space - after all, the internet was conceived as a Defense Dept project. Over the years, only a portion of that address space was ever utilized (i.e. announced by the DoD on the internet). As the internet grew, the pool of available IPv4 dwindled until a private market emerged to facilitate the sale of what was no longer just a simple router setting, but an increasingly precious commodity.

Even as other nations began purchasing IPv4 as a strategic investment, the DoD sat on much of their unused supply of address space. In 2019, Members of Congress attempted to force the sale of all of the DoD’s IPv4 address space by proposing the following provision be added to the National Defense Authorization Act for 2020:

Sale of Internet Protocol Addresses. Section 1088 would require the Secretary of Defense to sell at fair market value all of the department’s Internet Protocol version 4 (IPv4) addresses over the next 10 years. The proceeds from those sales, after paying for sales transaction costs, would be deposited in the General Fund of the Treasury.

The authors of the proposed legislation used a Congressional Budget Office estimate that a /8 (16.7 million addresses) would fetch $100 million after transaction fees. In the end, it didn’t matter because this provision was stripped from the final bill that was signed into law - the Department of Defense would be funded in 2020 without having to sell this precious internet resource.
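For scale, the CBO estimate works out to roughly six dollars per address (again, my own illustrative arithmetic, not a figure from the bill):

```python
slash8 = 2 ** 24            # a /8 holds 16,777,216 addresses
estimate = 100_000_000      # CBO: ~$100M per /8, net of transaction fees

per_address = estimate / slash8
print(round(per_address, 2))                    # dollars per IPv4 address

announced = 175_000_000     # AS8003's announcement by mid-April 2021
print(round(per_address * announced / 1e9, 2))  # billions of dollars at that rate
```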

What is AS8003 doing?

Last month, astute contributors to the NANOG listserv highlighted the oddity of massive amounts of DoD address space being announced by what appeared to be a shell company. While a BGP hijack was ruled out, the exact purpose was still unclear. Until yesterday when the Department of Defense provided an explanation to reporters from the Washington Post about this unusual internet development. Their statement said:

Defense Digital Service (DDS) authorized a pilot effort advertising DoD Internet Protocol (IP) space using Border Gateway Protocol (BGP). This pilot will assess, evaluate and prevent unauthorized use of DoD IP address space. Additionally, this pilot may identify potential vulnerabilities. This is one of DoD’s many efforts focused on continually improving our cyber posture and defense in response to advanced persistent threats. We are partnering throughout DoD to ensure potential vulnerabilities are mitigated.

I interpret this to mean that the objectives of this effort are twofold. First, to announce this address space to scare off any would-be squatters, and secondly, to collect a massive amount of background internet traffic for threat intelligence.

On the first point, there is a vast world of fraudulent BGP routing out there. As I’ve documented over the years, various types of bad actors use unrouted address space to bypass blocklists in order to send spam and other types of malicious traffic.

On the second, there is a lot of background noise that can be scooped up when announcing large ranges of IPv4 address space. A recent example is Cloudflare’s announcement of two previously unrouted prefixes in 2018.

For decades, internet routing operated with a widespread assumption that ASes didn’t route these prefixes on the internet (perhaps because they were canonical examples from networking textbooks). According to their blog post soon after the launch, Cloudflare received “~10Gbps of unsolicited background traffic” on their interfaces.

And that was just for 512 IPv4 addresses! Of course, those addresses were very special, but it stands to reason that 175 million IPv4 addresses will attract orders of magnitude more traffic, from misconfigured devices and networks that mistakenly assumed that all of this DoD address space would never see the light of day.


While yesterday’s statement from the DoD answers some questions, much remains a mystery. Why did the DoD not just announce this address space themselves instead of directing an outside entity to use the AS of a long dormant email marketing firm? Why did it come to life in the final moments of the previous administration?

We likely won’t get all of the answers anytime soon, but we can certainly hope that the DoD uses the threat intel gleaned from the large amounts of background traffic for the benefit of everyone. Maybe they could come to a NANOG conference and present about the troves of erroneous traffic being sent their way.

As a final note: your corporate network may be using the formerly unused DoD space internally, and if so, there is a risk you could be leaking it out to a party that is actively collecting it. How could you know? Using Kentik’s Data Explorer, you could quickly and easily view the stats of exactly how much data you’re leaking to AS8003. May be worth a check, and if so, start a free trial of Kentik to do so.

We’ve got a short video Tech Talk about this topic as well—see Kentik Tech Talks, Episode 9: How to Use Kentik to Check if You’re Leaking Data to AS8003 (GRS-DoD).


Thoughts on Clojure UI framework

I had a long‑standing dream: to implement a UI framework. Nothing inspires me more than noticing hundreds of subtle interactions (e.g. text selection in a text box) and seeing how combined, they bring together a feel of an alive and native component.

For a long time, I thought it was a Leviathan task: something for a hundred‑person team and tens of years. But then Flutter came along and showed that it’s actually very feasible to re‑implement the entirety of a platform UI from scratch, to the very last detail.

After that, I joined JetBrains and worked not on one, but two different UI frameworks, which, again, turned out to be a very doable task for a small team.

Lately, Clojurists Together and Roam Research agreed to sponsor this work. I just can’t keep ignoring the signs the Universe is sending me.

I have no framework code yet, but some foundations are laid out in Skija and JWM.

This post is my thoughts on the subject, some opinions I have and some questions I am not sure how to answer. It’s aimed to facilitate discussion, so please, share your thoughts!

Why cross‑platform

The very same CPU, memory, and graphics card have no problem executing Windows, Linux, or macOS apps. They don’t care. Yet you can’t run a macOS app on Windows, Linux on macOS, or Windows on Linux. This is not a fundamental property of software, it’s a stupid historical mistake and we should work as hard as we can to correct it.

Why desktop apps

Despite mobile dominance, desktop remains important for professional and productivity apps, and to build them we need tools.

Mobile has Flutter, Compose, SwiftUI, and desktop is… less advanced. Especially cross‑platform desktop.

Why not mobile and desktop together

They’re too different. I can’t imagine writing a single app that works both on desktop AND on mobile and is good at both. Two different UIs — sure, but in that case, mobile is already pretty well covered.

I plan to focus on the high‑quality desktop instead of finding a mediocre middle ground between desktop and mobile.

Why not web

The web started to get more suitable for apps probably since Gmail in 2004. It’s very convenient, yet still very limited and has shortcomings.

I think the fundamental problem of the web is that its APIs are too high‑level. Without low‑level APIs, simple things are sometimes stupidly hard to do, and you have to undo the stuff web is doing for you to get what you need. This is backward.

To top this off, the web is also too bloated, too unstable, OS integrations are too limited, and it puts a very hard limit on your performance.

In other words, the web is good for simple stuff, but not for anything complex because of lack of control.

And you don’t get to choose a programming language :(

Why not Electron?

Electron is pretty much the web, but with better OS integrations. It’s still full of compromises: you don’t have threads, you’re stuck with JavaScript, you can’t do your own layout or render smooth 144 Hz animations, etc. It adds 150 MB to your app package and makes it a memory hog.

Yet it has been a massive success. I think that means our industry is craving a good desktop UI framework. Electron is a great step in the right direction, but not the final form.

Native or custom

There are two classes of cross‑platform UI frameworks: ones that try to wrap native widgets (SWT, Qt) and ones that draw everything themselves (Swing, JavaFX, Flutter).

I don’t really see a choice here, because I’ve never seen native widgets wrapped in cross‑platform abstractions that work. They always get a million little details wrong and still don’t feel native. Qt might look decent on some Linux DEs but falls apart on other systems.

Qt delivering a mix of native and custom widgets, mistreating the native ones and creating a visual mess

On the other hand, the web has taught people not to care for OS‑native look and feel. Your app will be accepted on all three platforms no matter which fonts, colors, and button shapes you use, as long as it looks and feels good.

Why new framework

Well, I want to build UI in Clojure, and Clojure is limited to what Java has: Swing/AWT or JavaFX.

Swing/AWT is very old, has lots of shortcomings, and modern problems are solved on top of the old APIs. The downside of being in Java Core is that you can’t really evolve as the world changes because you can’t break or remove things.

JavaFX has learned a lot from Swing mistakes but has very limited graphics APIs and weird HiDPI and ClearType rendering issues.

Finally, declarative frameworks seem to be a good idea, but neither Swing nor JavaFX is declarative. There’s cljfx which is declarative but it’s based on JavaFX widgets and I don’t want to use those.

Why Clojure

Finally, the biggest reason I think this is a worthy idea is Clojure itself. Having worked on UIs in ClojureScript + Figwheel, I know live reload is a blessing, and Clojure has an even better story there. REPL + live reload + declarative UI framework is a match made in heaven. Anything else will have to try really hard to get even close to this combination.

Tweak and reuse

The web’s solution to customization is that each button has hundreds of properties you can tweak. You can set background, gradients, border radii, but if you want tricky behavior, you are out of luck. On the other hand, if you don’t want any of that, you still have to bear the weight and complexity of 100 default properties.

I am thinking of another way of approaching things. Somewhere deep in the Compose internals I once saw something like this (not verbatim):

fun MaterialButton(text) {
    Hoverable {
        Clickable {
            RoundedRectangleClip {
                RippleEffect {
                    SolidFill {
                        Padding {
                            Text(text)
                        }
                    }
                }
            }
        }
    }
}

(and they say Lisps have too many parentheses)

What struck me here was that:

a. Internals are perfectly composable with each other, and

b. It’s trivial to write your own button!

If I don’t want different corners, I just write my own button, using 6 out of 7 existing components and only replacing RoundedRectangleClip with my own implementation. Want gradient? Replace SolidFill with GradientFill, but keep the rest!

This creates a great benefit both for the library (built‑in buttons don’t need hundreds of properties to satisfy everyone) and for the users (they can meaningfully reuse parts from the standard library and only replace parts they don’t like).

Call it a Lego model, if you will. Perfectly composable and reusable chunks and an invitation to play with it.
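The Lego model can be sketched in a few lines of Python (all names here are invented; "rendering" just records which layer ran, so the sketch stays runnable):

```python
def layer(name):
    """A composable layer: wraps a child render function with its own step."""
    def wrap(inner):
        def render(out):
            out.append(name)     # pretend to draw this layer
            return inner(out)    # then render the child inside it
        return render
    return wrap

def text(s):
    def render(out):
        out.append("text:" + s)
        return out
    return render

def compose(layers, leaf):
    render = leaf
    for lyr in reversed(layers):
        render = lyr(render)
    return render

standard = [layer("hoverable"), layer("clickable"), layer("rounded-clip"),
            layer("ripple"), layer("solid-fill"), layer("padding")]
button = compose(standard, text("Press me"))
print(button([]))

# Keep 6 of 7 pieces, swap only the fill for a gradient:
custom = standard.copy()
custom[4] = layer("gradient-fill")
gradient_button = compose(custom, text("Press me"))
print(gradient_button([]))
```

Because each piece only knows how to wrap its child, swapping one layer never disturbs the others.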

First‑class rendering access

At some point, somebody has to draw the actual button. I think it would be great to have direct and first‑class access to override the rendering of anything and draw what you need directly. I can spend all day guessing what type of rounded corners you might need, or can give you the Skia canvas and let you do what you want.

Nothing hurts me more than seeing people try to render diagonals with “creative” use of border‑width. It just feels wrong.

The same goes for the layout, by the way.

Declarative model

I’ve been on board with the React VDOM approach since 2014, when it first made its way into the Clojure community. I think it’s a fantastic model and a huge breakthrough in how we build UIs.

I also think it works twice as well with an immutable and data‑oriented language like Clojure, where you can load in and out parts of your application, keep the state and see changes update the UI live without reloading.

The approach is seeing even more adoption now, as Flutter, Compose and SwiftUI joined the hype train. And I don’t see the reason not to go in that direction either.

Why not immediate mode

Immediate mode sounds great in terms of simplicity and speed of development. Unfortunately, it has fundamental problems with layout (you only get one pass and can’t know the size of your content in advance), so retained mode it is.

I’d love to get as close as possible to the simplicity of it, though.


I find the particular approach Compose has chosen a step in the wrong direction.

In Compose, you don’t pass components around. Instead, you call side‑effecting functions that modify the global state somewhere.

var button = Button("Press me") // <- button already added to the dom
List(button) // <- impossible

This approach means that if someone has created a button, you can’t “hold onto it”: you can’t log, inspect, modify, throw it away. The shape of your code defines the resulting UI tree, not the shape of your values. And programming languages have been optimizing value manipulation, not code manipulation.

In contrast, in React components are values that you can save, pass and do whatever you want. That’s the approach I like.


All ClojureScript‑React wrappers have to transform Clojure values to React values somehow. The beauty of building a new UI framework is that it can be Clojure‑native and completely skip this step: stuff like Hiccup could be interpreted directly.
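As a toy illustration of Hiccup-style UI-as-data (with a hypothetical mini-interpreter, not the framework's actual one): the tree is a plain value you can hold, inspect and transform before anything renders it.

```python
# The UI tree is plain data: [tag, attrs, *children], like Hiccup in Clojure.
button = ["button", {"on-click": "save"}, "Press me"]
dialog = ["dialog", {}, ["p", {}, "Save changes?"], button]

def render(node):
    """Tiny hypothetical interpreter: walk the value and emit markup."""
    if isinstance(node, str):
        return node
    tag, attrs, *children = node
    inner = "".join(render(c) for c in children)
    return "<%s>%s</%s>" % (tag, inner, tag)

# Because components are values, you can log, diff or rewrite them first:
print(render(dialog))
```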


I have three main inspirations here. The first one is the one‑pass layout algorithm from Flutter.

It’s a brilliant system: simple, performant, easy to understand, easy to extend/hack around. This is important because we want people to intercept control and build their own layout logic for the components that are different enough from the components that will be shipped.
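The idea can be sketched minimally (with made-up glyph metrics): constraints flow down the tree, sizes flow back up, all in a single pass.

```python
# Toy one-pass layout: the parent passes a max width down, the child
# returns its chosen (width, height) up. Glyph metrics are invented.

class Text:
    def __init__(self, s):
        self.s = s
    def layout(self, max_w):
        w = min(len(self.s) * 8, max_w)               # pretend 8px per glyph
        lines = -(-len(self.s) * 8 // max(max_w, 1))  # ceiling division
        return (w, lines * 16)                        # pretend 16px line height

class Padding:
    def __init__(self, px, child):
        self.px, self.child = px, child
    def layout(self, max_w):
        cw, ch = self.child.layout(max_w - 2 * self.px)  # tighter constraint down
        return (cw + 2 * self.px, ch + 2 * self.px)      # own size back up

print(Padding(10, Text("Press me")).layout(200))
```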

The second is Subform layout, which taught me that layout system can be beautiful and symmetric, the same units can be used for everything and that flexbox is not the pinnacle of human engineering.

The third is a notion that parents should position children, and spacing is a part of the parent’s layout, thus margins are considered harmful.

Defaults are part of the API

SwiftUI is notorious for shipping components that change defaults depending on the OS version and context they are used in.

What can you build if your foundation is unsteady? Not much. The approach SwiftUI is taking is to ask developers to update their apps every year.

We don’t play this way in Clojure. In Clojure, we want people to build apps that last tens of years without a single change.

The solution is simple: if we commit to some defaults (e.g. paddings, line heights, colors), you can consider them to be part of the stable API.

Text handling

Typography is the most constrained part of the existing Java UI frameworks: there is simply no place in the API to specify a stylistic set or load a variable font.

Since I’ve worked with fonts a little bit, I am eager to include high-quality modern typography into the new framework.

At the very minimum, I want to get these things:

The latter means font size and line-height will probably work a little different than they do on the web, but it will be much easier to correctly and reliably center a text label inside a button.

Being pedantic

As a pedantic person, I want every little detail to be right. I believe that even small discrepancies can communicate a feeling of a poorly made, unreliable product.

Stuff I aim to get right:

  • UI scaling & pixel grid. No blurry lines even on fractional UI scales.
  • VSync. Getting refresh rate correct and synchronized with the monitor.
  • Color spaces. Believe it or not, it’s the responsibility of the app to render in the monitor color space. With the popularity of Macs, P3, and HDR external monitors, things are not as simple as they used to be in sRGB days anymore.
  • Multi‑monitor. Each can have its own scale/refresh rate/color profile.
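To illustrate the pixel-grid point with a hypothetical helper: on fractional scales, logical coordinates need snapping to the physical grid, or lines land between pixels and blur.

```python
def snap(logical, scale):
    """Snap a logical coordinate to the physical pixel grid.
    Unsnapped coordinates on fractional scales land between pixels and blur."""
    return round(logical * scale) / scale

# At 125% scaling, a line at logical y=13 maps to physical 16.25px;
# snapping moves it to logical 12.8 (physical 16), so it renders crisp.
print(snap(13, 1.25))
```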


Getting smooth animations is very important. I am aiming at 144 Hz, as this seems to be the most common “above 60” refresh rate for the next few years. No concrete plan here, just “don’t do stupid things and measure performance often” for now.

Startup time

The startup time of Clojure bothers me, but maybe compiling with GraalVM or bundling a custom Clojure runtime could help; it remains to be seen. There’s hope.

Full package

I imagine everything will come together as some sort of Electron, but for the JVM.

The goal is this: write your app in Clojure, package and distribute it as any other native desktop app.

Definition of done: nobody can tell the app is written in Clojure.

Your feedback

This is just a dump of everything that is in my head right now. Nothing is final, many things are vague, everything could change.

That’s why I want to start a discussion:

  • What do you expect from a desktop UI framework for Clojure?
  • What would you build?
  • What will not work for you?
  • Where am I wrong?
  • Do you have a better insight into any of the topics?
  • Did I miss something important that should be covered?

Share your thoughts! Reply on Twitter or drop me a letter.
