
The Most Controversial Change In Emacs History – Random Thoughts


Just a scant handful of decades after XEmacs introduced a mode line with proportional fonts, we’re thinking about doing the same in Emacs.

Here’s how the mode line looks (by default) in Emacs 28:

Here’s how we’re considering having it look (by default) in Emacs 29:

See? Huge difference. Huge.

The attractive thing about this change is that, well, it’s prettier, but it’s also more consistent with the other elements at the margins of the Emacs frame: The menus and the toolbar have used proportional fonts for a long time, so doing the same in the mode line might also be nice. And you can generally squeeze in more information when using proportional fonts, which is helpful if you’re jamming a lot of stuff into the mode line:

Changing this has been brought up a number of times over the years, but there’s been pushback because some of the elements in the mode line are pretty dynamic, and it’d suck if everything moved around. For instance, when displaying the column number in the mode line, it might be annoying to have the rest of the line shift to the left/right when moving the cursor around in the window.

So we’ve now added a new display spec (called ‘min-width’) that you can slap around bits of text (including in the mode line) to ensure that the width never shrinks below a certain point.
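As a quick illustration, here’s a minimal sketch in Emacs Lisp (assuming a build from the current development sources with the new ‘min-width’ support; the value is a list containing a width spec, where a number like 6.0 means six canonical character widths):

  ;; Reserve at least six columns for a dynamic element, so the text
  ;; around it doesn't jump as the element's natural width changes.
  (insert (propertize "42" 'display '(min-width (6.0))))

The text can still grow wider than the minimum; the spec only stops it from shrinking below it.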

Perhaps that’ll make a difference in the level of resistance? I guess we’ll find out, because starting today, we’re running a month-long test on “master”: This new mode line look is enabled by default now, and in a month we’ll evaluate based on the feedback.

So give it a whirl for a few weeks, and vote on the emacs-devel mailing list. (And report any glitches, of course. And suggestions for improvements are always welcome.)




Handle Chromium & Firefox sessions with org-mode


I was a big fan of Session Manager, a small addon for Chrome and Chromium that would save all open tabs, assign a name to the session and, when needed, restore it.

Very useful, especially if you are like me, switching between multiple "mind sessions" during the day - research, development or maybe news reading. Or if you'd simply like to remember the workflow (and tabs) you had a few days ago.

After I decided to ditch all extensions from Chromium except uBlock Origin, it was time to look for an alternative. My main goals were for it to be browser agnostic and to store session links in a text file, so I could enjoy all the goodies of plain text. What would be better for that than good old org-mode ;)

A long time ago I found this trick: Get the currently open tabs in Google Chrome via the command line. With some elisp sugar and coffee, here is the code:

(require 'cl-lib)

(defun save-chromium-session ()
  "Read the current chromium session and generate an org-mode heading with items."
  (interactive)
  (save-excursion
    (let* ((cmd "strings ~/'.config/chromium/Default/Current Session' | grep -E '^https?://' | sort | uniq")
           (ret (shell-command-to-string cmd)))
      (insert
       (concat
        "* "
        (format-time-string "[%Y-%m-%d %H:%M:%S]")
        "\n"
        (mapconcat 'identity
                   (cl-reduce (lambda (lst x)
                                (if (and x (not (string= "" x)))
                                    (cons (concat "  - " x) lst)
                                  lst))
                              (split-string ret "\n")
                              :initial-value (list))
                   "\n"))))))

(defun restore-chromium-session ()
  "Restore a session by opening each link in the list with `browse-url'.
Make sure to put the cursor on the date heading that contains the list of urls."
  (interactive)
  (save-excursion
    (beginning-of-line)
    (when (looking-at "^\\*")
      (forward-line 1)
      (while (looking-at "^[ ]+-[ ]+\\(https?://.+\\)$")
        (let* ((ln (thing-at-point 'line t))
               (ln (replace-regexp-in-string "^[ ]+-[ ]+" "" ln))
               (ln (replace-regexp-in-string "\n" "" ln)))
          (browse-url ln))
        (forward-line 1)))))

So, how does it work?

Evaluate the above code, open a new org-mode file and call M-x save-chromium-session. It will create something like this:

* [2019-12-04 12:14:02]
  - https://www.reddit.com/r/emacs/comments/...
  - https://www.reddit.com/r/Clojure
  - https://news.ycombinator.com

or whatever urls are open in the Chromium instance. To restore it, put the cursor on the desired date and run M-x restore-chromium-session. All tabs should come back.

Here is how I use it, with randomly generated data for the purpose of this text:



* [2019-12-01 23:15:00]...
* [2019-12-02 18:10:20]...
* [2019-12-03 19:00:12]
  - https://www.reddit.com/r/emacs/comments/...
  - https://www.reddit.com/r/Clojure
  - https://news.ycombinator.com

* [2019-12-04 12:14:02]
  - https://www.reddit.com/r/emacs/comments/...
  - https://www.reddit.com/r/Clojure
  - https://news.ycombinator.com

Note that this hack for reading the Chromium session isn't perfect: strings will extract whatever looks like a string or a url from the binary database, and sometimes that yields small artifacts in the urls. But you can easily edit those out and keep the session file lean and clean.

To actually open the tabs, the elisp code uses browse-url, which can be pointed at Chromium, Firefox or any other browser through the browse-url-browser-function variable. Make sure to read the documentation for this variable.
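For example, a minimal sketch of that customization (both functions are standard browse-url backends; the "chromium" program name is an assumption for illustration - use whatever binary is on your system):

  ;; Open saved session links in Firefox:
  (setq browse-url-browser-function 'browse-url-firefox)

  ;; Or point browse-url at an arbitrary browser binary:
  (setq browse-url-browser-function 'browse-url-generic
        browse-url-generic-program "chromium")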

Don't forget to put the session file in git, mercurial or svn and enjoy the fact that you will never lose your session history again :)

If you are using Firefox (recent versions) and would like to pull session urls, here is how to do it.

First, download and compile lz4json, a small tool that decompresses the Mozilla lz4json format in which Firefox stores session data. At the time of writing, session data is stored in $HOME/.mozilla/firefox/<unique-name>/sessionstore-backups/recovery.jsonlz4.

If Firefox is not running, recovery.jsonlz4 will not be present; use previous.jsonlz4 instead.

To extract urls, try this in terminal:

$ lz4jsoncat recovery.jsonlz4 | grep -oP '"(http.+?)"' | sed 's/"//g' | sort | uniq

and update save-chromium-session with:

(defun save-chromium-session ()
  "Reads chromium current session and converts it to org-mode chunk."
  (interactive)
  (save-excursion
    (let* ((path "~/.mozilla/firefox/<unique-name>/sessionstore-backups/recovery.jsonlz4")
           (cmd (concat "lz4jsoncat " path " | grep -oP '\"(http.+?)\"' | sed 's/\"//g' | sort | uniq"))
           (ret (shell-command-to-string cmd)))
...

Updating the documentation strings, the function name and any further refactoring is left as an exercise.


Org Real


The real:// url scheme was based on the http:// scheme with some differences.

There is no "host" component; all components in a real URL are treated identically and are called containers. Each container can have a query string, whereas the http scheme can only have one query string at the end of a URL. And finally, spaces are allowed in component names.

real://bathroom cabinet/third shelf?rel=in/razors?rel=above/toothbrush?rel=to the left of

Real links are read from the most general to the most specific, so in this example the bathroom cabinet is the topmost container and has a child third shelf with a relationship of "in". The relationship query parameter refers to the container immediately to the left, so this tells org-real that the third shelf is in the bathroom cabinet.


My road to dark mode


There’s a drunk reddit post I read months ago talking a lot about programming life and many other things. There’s one thing from it I want to talk about.

Dark mode is great until you’re forced to use light mode (webpage or an unsupported app). That’s why I use light mode.

This was so true for me. I used to use dark mode everywhere until I found that so many webpages are so bright that they literally hurt my eyes at my dark-mode monitor brightness. So I turned back to light mode.

My browser has had Dark Reader installed for many years, but I didn’t use it much until recently. When I revisited it, I found it supports a shortcut to toggle the current webpage between dark and light mode. I rebound it to a convenient key, and now everything just works in the browser.

I rebind shortcuts all the time. Why the hell hadn’t I done this earlier!

What’s more, there’s a PR for Dark Reader that makes its URL matching system more powerful, so that it can handle sites with partial dark mode support, like the GitHub main site and GitHub Marketplace.

You can fork the repo, merge the PR yourself and build a local version right now if you can’t wait.


The Rise of Long-Form Generative Art — Tyler Hobbs


The New World

Today, platforms like Art Blocks (and in the future, I’m sure many others) allow for something different. The artist creates a generative script (e.g. Fidenza) that is written to the Ethereum blockchain, making it permanent, immutable, and verifiable. Next, the artist specifies how many iterations will be available to be minted by the script. A typical choice is in the 500 to 1000 range. When a collector mints an iteration (i.e. they make a purchase), the script is run to generate a new output, and that output is wrapped in an NFT and transferred directly to the collector. Nobody, including the collector, the platform, or the artist, knows precisely what will be generated when the script is run, so the full range of outputs is a surprise to everyone.

Note the two key differences from earlier forms of generative art. First, the script output goes directly into the hands of the collector, with no opportunity for intervention or curation by the artist. Second, the generative algorithms are expected to create roughly 100x more iterations than before. Both of these have massive implications for the artist. They should also have massive implications for how collectors and critics evaluate the quality of a generative art algorithm.

Analyzing Quality

As with any art form, there are a million unpredictable ways to make something good. Without speaking in absolutes, I'll try to describe what I think are useful characteristics for evaluating whether a long-form generative art program is successful or not, and how this differs from previous (short) forms of generative art.

Fundamentally, with long-form, collectors and viewers become much more familiar with the "output space" of the program. In other words, they have a clear idea of exactly what the program is capable of generating, and how likely it is to generate one output versus another. This was not the case with short-form works, where the output space was either very narrow (sometimes singular) or cherry-picked for the best highlights. By withholding most of the program output, the artist could present a particular, limited view of the algorithm. With long-form works, the artist has nowhere to hide, and collectors will get to know the scope of the algorithm almost as well as the artist.

What are the implications of this? It makes the "average" output from the program crucial. In fact, even the worst outputs are arguably important, because they're just as visible. Before, this bad output could be ignored and discarded. The artist only cared about the top 5% of output, because that's what would make it into the final curated set to be presented to the public. The artist might have been happy to design an algorithm that produced 95% garbage and 5% gems.


The Mystery of AS8003 | Kentik


On January 20, 2021, a great mystery appeared in the internet’s global routing table. An entity that hadn’t been heard from in over a decade began announcing large swaths of formerly unused IPv4 address space belonging to the U.S. Department of Defense. Registered as GRS-DoD, AS8003 began announcing 11.0.0.0/8 among other large DoD IPv4 ranges.

According to data available from University of Oregon’s Routeviews project, one of the very first BGP messages from AS8003 to the internet was:

TIME: 01/20/21 16:57:35
TYPE: BGP4MP/MESSAGE/Update
FROM: 62.115.128.183 AS1299
TO: 128.223.51.15 AS6447
ORIGIN: IGP
ASPATH: 1299 6939 6939 8003
NEXT_HOP: 62.115.128.183
ANNOUNCE
  11.0.0.0/8

The message above has a timestamp of 16:57 UTC (11:57am ET) on January 20, 2021, moments after the swearing in of Joe Biden as the President of the United States and minutes before the statutory end of the administration of Donald Trump at noon Eastern time.

The questions that started to surface included: Who is AS8003? Why are they announcing huge amounts of IPv4 space belonging to the U.S. Department of Defense? And perhaps most interestingly, why did it come alive within the final three minutes of the Trump administration?

By late January, AS8003 was announcing about 56 million IPv4 addresses, making it the sixth largest AS in the IPv4 global routing table by originated address space. By mid-April, AS8003 dramatically increased the amount of formerly unused DoD address space that it announced to 175 million unique addresses.

Following the increase, AS8003 became, far and away, the largest AS in the history of the internet as measured by originated IPv4 space. By comparison, AS8003 now announces 61 million more IP addresses than the now-second biggest AS in the world, China Telecom, and over 100 million more addresses than Comcast, the largest residential internet provider in the U.S.

In fact, as of April 20, 2021, AS8003 is announcing so much IPv4 space that 5.7% of the entire IPv4 global routing table is presently originated by AS8003. In other words, more than one out of every 20 IPv4 addresses is presently originated by an entity that didn’t even appear in the routing table at the beginning of the year.

A valuable asset

Decades ago, the U.S. Department of Defense was allocated numerous massive ranges of IPv4 address space - after all, the internet was conceived as a Defense Dept project. Over the years, only a portion of that address space was ever utilized (i.e. announced by the DoD on the internet). As the internet grew, the pool of available IPv4 dwindled until a private market emerged to facilitate the sale of what was no longer just a simple router setting, but an increasingly precious commodity.

Even as other nations began purchasing IPv4 as a strategic investment, the DoD sat on much of their unused supply of address space. In 2019, Members of Congress attempted to force the sale of all of the DoD’s IPv4 address space by proposing the following provision be added to the National Defense Authorization Act for 2020:

Sale of Internet Protocol Addresses. Section 1088 would require the Secretary of Defense to sell at fair market value all of the department’s Internet Protocol version 4 (IPv4) addresses over the next 10 years. The proceeds from those sales, after paying for sales transaction costs, would be deposited in the General Fund of the Treasury.

The authors of the proposed legislation used a Congressional Budget Office estimate that a /8 (16.7 million addresses) would fetch $100 million after transaction fees. In the end, it didn’t matter because this provision was stripped from the final bill that was signed into law - the Department of Defense would be funded in 2020 without having to sell this precious internet resource.

What is AS8003 doing?

Last month, astute contributors to the NANOG listserv highlighted the oddity of massive amounts of DoD address space being announced by what appeared to be a shell company. While a BGP hijack was ruled out, the exact purpose was still unclear. Until yesterday when the Department of Defense provided an explanation to reporters from the Washington Post about this unusual internet development. Their statement said:

Defense Digital Service (DDS) authorized a pilot effort advertising DoD Internet Protocol (IP) space using Border Gateway Protocol (BGP). This pilot will assess, evaluate and prevent unauthorized use of DoD IP address space. Additionally, this pilot may identify potential vulnerabilities. This is one of DoD’s many efforts focused on continually improving our cyber posture and defense in response to advanced persistent threats. We are partnering throughout DoD to ensure potential vulnerabilities are mitigated.

I interpret this to mean that the objectives of this effort are twofold. First, to announce this address space to scare off any would-be squatters, and secondly, to collect a massive amount of background internet traffic for threat intelligence.

On the first point, there is a vast world of fraudulent BGP routing out there. As I’ve documented over the years, various types of bad actors use unrouted address space to bypass blocklists in order to send spam and other types of malicious traffic.

On the second, there is a lot of background noise that can be scooped up when announcing large ranges of IPv4 address space. A recent example is Cloudflare’s announcement of 1.1.1.0/24 and 1.0.0.0/24 in 2018.

For decades, internet routing operated with a widespread assumption that ASes didn’t route these prefixes on the internet (perhaps because they were canonical examples from networking textbooks). According to their blog post soon after the launch, Cloudflare received “~10Gbps of unsolicited background traffic” on their interfaces.

And that was just for 512 IPv4 addresses! Of course, those addresses were very special, but it stands to reason that 175 million IPv4 addresses will attract orders of magnitude more traffic. More misconfigured devices and networks that mistakenly assumed that all of this DoD address space would never see the light of day.

Conclusion

While yesterday’s statement from the DoD answers some questions, much remains a mystery. Why did the DoD not just announce this address space themselves instead of directing an outside entity to use the AS of a long dormant email marketing firm? Why did it come to life in the final moments of the previous administration?

We likely won’t get all of the answers anytime soon, but we can certainly hope that the DoD uses the threat intel gleaned from the large amounts of background traffic for the benefit of everyone. Maybe they could come to a NANOG conference and present about the troves of erroneous traffic being sent their way.

As a final note: your corporate network may be using the formerly unused DoD space internally, and if so, there is a risk you could be leaking it out to a party that is actively collecting it. How could you know? Using Kentik’s Data Explorer, you could quickly and easily view the stats of exactly how much data you’re leaking to AS8003. May be worth a check, and if so, start a free trial of Kentik to do so.

We’ve got a short video Tech Talk about this topic as well—see Kentik Tech Talks, Episode 9: How to Use Kentik to Check if You’re Leaking Data to AS8003 (GRS-DoD).
