Moxie Marlinspike: My first impressions of web3


Despite considering myself a cryptographer, I have not found myself particularly drawn to “crypto.” I don’t think I’ve ever actually said the words “get off my lawn,” but I’m much more likely to click on Pepperidge Farm Remembers flavored memes about how “crypto” used to mean “cryptography” than I am the latest NFT drop.

Also – cards on the table here – I don’t share the same generational excitement for moving all aspects of life into an instrumented economy.

Even strictly on the technological level, though, I haven’t yet managed to become a believer. So given all of the recent attention into what is now being called web3, I decided to explore some of what has been happening in that space more thoroughly to see what I may be missing.

How I think about web1 and web2

web3 is a somewhat ambiguous term, which makes it difficult to rigorously evaluate what the ambitions for web3 should be, but the general thesis seems to be that web1 was decentralized, web2 centralized everything into platforms, and that web3 will decentralize everything again. web3 should give us the richness of web2, but decentralized.

It’s probably good to have some clarity on why centralized platforms emerged to begin with, and in my mind the explanation is pretty simple:

  1. People don’t want to run their own servers, and never will. The premise for web1 was that everyone on the internet would be both a publisher and consumer of content as well as a publisher and consumer of infrastructure.
    We’d all have our own web server with our own web site, our own mail server for our own email, our own finger server for our own status messages, our own chargen server for our own character generation. However – and I don’t think this can be emphasized enough – that is not what people want. People do not want to run their own servers.
    Even nerds do not want to run their own servers at this point. Even organizations building software full time do not want to run their own servers at this point. If there’s one thing I hope we’ve learned about the world, it’s that people do not want to run their own servers. The companies that emerged offering to do that for you instead were successful, and the companies that iterated on new functionality based on what is possible with those networks were even more successful.
  2. A protocol moves much more slowly than a platform. After 30+ years, email is still unencrypted; meanwhile WhatsApp went from unencrypted to full e2ee in a year. People are still trying to standardize sharing a video reliably over IRC; meanwhile, Slack lets you create custom reaction emoji based on your face.
    This isn’t a funding issue. If something is truly decentralized, it becomes very difficult to change, and often remains stuck in time. That is a problem for technology, because the rest of the ecosystem is moving very quickly, and if you don’t keep up you will fail. There are entire parallel industries focused on defining and improving methodologies like Agile to try to figure out how to organize enormous groups of people so that they can move as quickly as possible because it is so critical.
    When the technology itself is more conducive to stasis than movement, that’s a problem. A sure recipe for success has been to take a 90’s protocol that was stuck in time, centralize it, and iterate quickly.

But web3 intends to be different, so let’s take a look. In order to get a quick feeling for the space and a better understanding for what the future may hold, I decided to build a couple of dApps and create an NFT.

Making some distributed apps

To get a feeling for the web3 world, I made a dApp called Autonomous Art that lets anyone mint a token for an NFT by making a visual contribution to it. The cost of making a visual contribution increases over time, and the funds a contributor pays to mint are distributed to all previous artists (visualized, this financial structure resembles a pyramid). At the time of this writing, over $38k USD has gone into creating this collective art piece.
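
The contract's exact split formula isn't given here, so as an illustration only, with an assumed equal split, the payout rule could look like this:

// Hypothetical sketch of the payout described above: each new mint's fee
// is split among everyone who contributed before. The equal split is an
// assumption; the actual contract's formula isn't specified in this post.
function distributeMintFee(feeWei: bigint, previousContributors: string[]): Map<string, bigint> {
  const share = feeWei / BigInt(previousContributors.length);
  return new Map(previousContributors.map((addr) => [addr, share]));
}
// Early contributors collect a share of every later mint, which is why a
// picture of the flows resembles a pyramid.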

I also made a dApp called First Derivative that allows you to create, discover, and exchange NFT derivatives which track an underlying NFT, similar to financial derivatives which track an underlying asset 😉.

Both gave me a feeling for how the space works. To be clear, there is nothing particularly “distributed” about the apps themselves: they’re just normal React websites. The “distributedness” refers to where the state and the logic/permissions for updating the state live: on the blockchain instead of in a “centralized” database.

One thing that has always felt strange to me about the cryptocurrency world is the lack of attention to the client/server interface. When people talk about blockchains, they talk about distributed trust, leaderless consensus, and all the mechanics of how that works, but often gloss over the reality that clients ultimately can’t participate in those mechanics. All the network diagrams are of servers, the trust model is between servers, everything is about servers. Blockchains are designed to be a network of peers, but not designed such that it’s really possible for your mobile device or your browser to be one of those peers.

With the shift to mobile, we now live firmly in a world of clients and servers – with the former completely unable to act as the latter – and those questions seem more important to me than ever. Meanwhile, ethereum actually refers to servers as “clients,” so there’s not even a word for an actual untrusted client/server interface that will have to exist somewhere, and no acknowledgement that if successful there will ultimately be billions (!) more clients than servers.

For example, whether it’s running on mobile or the web, a dApp like Autonomous Art or First Derivative needs to interact with the blockchain somehow – in order to modify or render state (the collectively produced work of art, the edit history for it, the NFT derivatives, etc). That’s not really possible to do from the client, though, since the blockchain can’t live on your mobile device (or in your desktop browser realistically). So the only alternative is to interact with the blockchain via a node that’s running remotely on a server somewhere.

A server! But, as we know, people don’t want to run their own servers. As it happens, companies have emerged that sell API access to an ethereum node they run as a service, along with providing analytics, enhanced APIs they’ve built on top of the default ethereum APIs, and access to historical transactions. Which sounds… familiar. At this point, there are basically two companies. Almost all dApps use either Infura or Alchemy in order to interact with the blockchain. In fact, even when you connect a wallet like MetaMask to a dApp, and the dApp interacts with the blockchain via your wallet, MetaMask is just making calls to Infura!

These client APIs are not using anything to verify blockchain state or the authenticity of responses. The results aren’t even signed. An app like Autonomous Art says “hey what’s the output of this view function on this smart contract,” Alchemy or Infura responds with a JSON blob that says “this is the output,” and the app renders it.
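
Concretely, the read path looks something like the following sketch. The endpoint and project id are placeholders, and the helper name is made up; eth_call and the JSON-RPC envelope are the standard ethereum node interface:

// A sketch of how a dApp reads contract state through a hosted node.
// RPC_URL and the project id are placeholders, not a real endpoint.
const RPC_URL = "https://mainnet.infura.io/v3/<project-id>";

async function readViewFunction(contract: string, calldata: string): Promise<string> {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_call",
      params: [{ to: contract, data: calldata }, "latest"],
    }),
  });
  const { result } = await res.json();
  // Nothing here verifies `result` against chain state: no signature,
  // no proof. The client renders whatever the node operator returns.
  return result;
}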

This was surprising to me. So much work, energy, and time has gone into creating a trustless distributed consensus mechanism, but virtually all clients that wish to access it do so by simply trusting the outputs from these two companies without any further verification. It also doesn’t seem like the best privacy situation. Imagine if every time you interacted with a website in Chrome, your request first went to Google before being routed to the destination and back. That’s the situation with ethereum today. All write traffic is obviously already public on the blockchain, but these companies also have visibility into almost all read requests from almost all users in almost all dApps.

Partisans of the blockchain might say that it’s okay if these types of centralized platforms emerge, because the state itself is available on the blockchain, so if these platforms misbehave clients can simply move elsewhere. However, I would suggest that this is a very simplistic view of the dynamics that make platforms what they are.

Let me give you an example.

Making an NFT

I also wanted to create a more traditional NFT. Most people think of images and digital art when they think of NFTs, but NFTs generally do not store that data on-chain; for most images, that would be far too expensive.

Instead of storing the data on-chain, NFTs contain a URL that points to the data. What surprised me about the standards was that there’s no hash commitment for the data located at the URL. Looking at many of the NFTs on popular marketplaces being sold for tens, hundreds, or millions of dollars, that URL often just points to some VPS running Apache somewhere. Anyone with access to that machine, anyone who buys that domain name in the future, or anyone who compromises that machine can change the image, title, description, etc. for the NFT to whatever they’d like at any time (regardless of whether or not they “own” the token). There’s nothing in the NFT spec that tells you what the image “should” be, or even allows you to confirm whether something is the “correct” image.
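
For contrast, here is a sketch of what a hash commitment would let a client do if the metadata carried one. The expectedSha256Hex field is hypothetical; nothing like it exists in the standard, which is exactly the problem:

// Hypothetical verification that the spec does not support: compare the
// content behind the URL against a digest committed to on-chain.
async function verifyNftImage(url: string, expectedSha256Hex: string): Promise<boolean> {
  const bytes = await (await fetch(url)).arrayBuffer();
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  const hex = Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
  // With today's NFT standards there is no expectedSha256Hex to compare
  // against, so a check like this simply cannot be written.
  return hex === expectedSha256Hex;
}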

So as an experiment, I made an NFT that changes based on who is looking at it, since the web server that serves the image can choose to serve different images based on the IP or User Agent of the requester. For example, it looked one way on OpenSea and another way on Rarible, but when you buy it and view it from your crypto wallet, it always displays as a large 💩 emoji. What you bid on isn’t what you get. There’s nothing unusual about this NFT, it’s how the NFT specifications are built. Many of the highest priced NFTs could turn into 💩 emoji at any time; I just made it explicit.
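
The server side of that experiment is almost trivially simple. A toy sketch; how each marketplace's crawler actually identifies itself, and the image names, are assumptions here:

// Toy version of the trick: serve different content depending on who asks.
import { createServer } from "node:http";

createServer((req, res) => {
  const ua = req.headers["user-agent"] ?? "";
  let image = "wallet-view.png"; // default: what the buyer's wallet sees
  if (ua.includes("OpenSea")) image = "opensea-view.png";
  else if (ua.includes("Rarible")) image = "rarible-view.png";
  // Redirect to whichever image suits this viewer; the NFT itself only
  // holds the URL of this server, so nothing on-chain changes.
  res.writeHead(302, { Location: `https://example.com/images/${image}` });
  res.end();
}).listen(8080);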

After a few days, without warning or explanation, the NFT I made was removed from OpenSea (an NFT marketplace).

The takedown suggests that I violated some term of service, but after reading the terms, I don’t see any that prohibit an NFT that changes based on where it is viewed from, and I was openly describing it that way.

What I found most interesting is that after OpenSea removed my NFT, it also no longer appeared in any crypto wallet on my device. But this is web3; how is that possible?

A crypto wallet like MetaMask, Rainbow, etc is “non-custodial” (the keys are kept client side), but it has the same problem as my dApps above: a wallet has to run on a mobile device or in your browser. Meanwhile, ethereum and other blockchains have been designed around the idea of a network of peers, but not designed such that it’s really possible for your mobile device or your browser to be one of those peers.

A wallet like MetaMask needs to do basic things like display your balance, your recent transactions, and your NFTs, as well as more complex things like constructing transactions, interacting with smart contracts, etc. In short, MetaMask needs to interact with the blockchain, but the blockchain has been built such that clients like MetaMask can’t interact with it. So like my dApp, MetaMask accomplishes this by making API calls to three companies that have consolidated in this space.

For instance, MetaMask displays your recent transactions by making an API call to etherscan:

GET https://api.etherscan.io/api?module=account&address=0x0208376c899fdaEbA530570c008C4323803AA9E8&offset=40&order=desc&action=txlist&tag=latest&page=1 HTTP/2.0

…displays your account balance by making an API call to Infura:

POST https://mainnet.infura.io/v3/d039103314584a379e33c21fbe89b6cb HTTP/2.0

{
    "id": 2628746552039525,
    "jsonrpc": "2.0",
    "method": "eth_getBalance",
    "params": [
        "0x0208376c899fdaEbA530570c008C4323803AA9E8",
        "latest"
    ]
}

…displays your NFTs by making an API call to OpenSea:

GET https://api.opensea.io/api/v1/assets?owner=0x0208376c899fdaEbA530570c008C4323803AA9E8&offset=0&limit=50 HTTP/2.0

Again, as with my dApp, these responses are not authenticated in any way. They’re not even signed so that you could later prove they were lying. The wallet reuses the same connections, TLS session tickets, etc for all the accounts in it, so if you’re managing multiple accounts in your wallet to maintain some identity separation, these companies know they’re linked.

MetaMask doesn’t actually do much; it’s just a view onto data provided by these centralized APIs. This isn’t a problem specific to MetaMask – what other option do they have? Rainbow, etc are set up in exactly the same way. (Interestingly, Rainbow has their own data for the social features they’re building into their wallet – social graph, showcases, etc – and has chosen to build all of that on top of Firebase instead of the blockchain.)

All this means that if your NFT is removed from OpenSea, it also disappears from your wallet. It doesn’t functionally matter that my NFT is indelibly on the blockchain somewhere, because the wallet (and increasingly everything else in the ecosystem) is just using the OpenSea API to display NFTs, which began returning a 304 with no content for the query of NFTs owned by my address!

Recreating this world

Given the history of why web1 became web2, what seems strange to me about web3 is that technologies like ethereum have been built with many of the same implicit trappings as web1. To make these technologies usable, the space is consolidating around… platforms. Again. People who will run servers for you, and iterate on the new functionality that emerges. Infura, OpenSea, Coinbase, Etherscan.

Likewise, the web3 protocols are slow to evolve. When building First Derivative, it would have been great to price minting derivatives as a percentage of the underlying’s value. That data isn’t on-chain, but it’s in an API that OpenSea will give you. People are excited about NFT royalties for the way that can benefit creators, but royalties aren’t specified in ERC-721, and it’s too late to change it, so OpenSea has its own way of configuring royalties that exists in web2 space. Iterating quickly on centralized platforms is already outpacing the distributed protocols and consolidating control into platforms.

Given those dynamics, I don’t think it should be a surprise that we’re already at a place where your crypto wallet’s view of your NFTs is OpenSea’s view of your NFTs. I don’t think we should be surprised that OpenSea isn’t a pure “view” that can be replaced, since it has been busy iterating the platform beyond what is possible strictly with the difficult-to-change standards.

I think this is very similar to the situation with email. I can run my own mail server, but it doesn’t functionally matter for privacy, censorship resistance, or control – because GMail is going to be on the other end of every email that I send or receive anyway. Once a distributed ecosystem centralizes around a platform for convenience, it becomes the worst of both worlds: centralized control, but still distributed enough to become mired in time. I can build my own NFT marketplace, but it doesn’t offer any additional control if OpenSea mediates the view of all NFTs in the wallets people use (and every other app in the ecosystem).

This isn’t a complaint about OpenSea or an indictment of what they’ve built. Just the opposite, they’re trying to build something that works. I think we should expect this kind of platform consolidation to happen, and given the inevitability, design systems that give us what we want when that’s how things are organized. My sense and concern, though, is that the web3 community expects some other outcome than what we’re already seeing.

It’s early days

“It’s early days still” is the most common refrain I see from people in the web3 space when discussing matters like these. In some ways, cryptocurrency’s failure to scale beyond relatively nascent engineering is what makes it possible to consider the days “early,” since objectively it has already been a decade or more.

However, even if this is just the beginning (and it very well might be!), I’m not sure we should consider that any consolation. I think the opposite might be true; it seems like we should take notice that from the very beginning, these technologies immediately tended towards centralization through platforms in order for them to be realized, that this has ~zero negatively felt effect on the velocity of the ecosystem, and that most participants don’t even know or care it’s happening. This might suggest that decentralization itself is not actually of immediate practical or pressing importance to the majority of people downstream, that the only amount of decentralization people want is the minimum amount required for something to exist, and that if not very consciously accounted for, these forces will push us further from rather than closer to the ideal outcome as the days become less early.

But you can’t stop a gold rush

When you think about it, OpenSea would actually be much “better” in the immediate sense if all the web3 parts were gone. It would be faster, cheaper for everyone, and easier to use. For example, to accept a bid on my NFT, I would have had to pay somewhere between $80 and $150 just in ethereum transaction fees. That puts an artificial floor on all bids, since otherwise you’d lose money by accepting a bid for less than the gas fees. Payment fees by credit card, which typically feel extortionary, look cheap compared to that. OpenSea could even publish a simple transparency log if people wanted a public record of transactions, offers, bids, etc to verify their accounting.
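
Such a log would not need much machinery: entries that each commit to the previous one are enough for outside mirrors to detect rewritten history. A minimal sketch of the idea, not anything OpenSea has built:

// Append-only log where each entry commits to the previous entry's hash,
// so published history can't be silently rewritten.
import { createHash } from "node:crypto";

interface Entry { data: string; prevHash: string; hash: string; }

const GENESIS = "0".repeat(64);

function append(log: Entry[], data: string): Entry[] {
  const prevHash = log.length ? log[log.length - 1].hash : GENESIS;
  const hash = createHash("sha256").update(prevHash + data).digest("hex");
  return [...log, { data, prevHash, hash }];
}

// Anyone mirroring the log can recompute the chain and detect tampering.
function verify(log: Entry[]): boolean {
  return log.every((e, i) => {
    const prev = i === 0 ? GENESIS : log[i - 1].hash;
    return e.prevHash === prev &&
      e.hash === createHash("sha256").update(prev + e.data).digest("hex");
  });
}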

However, if they had built a platform to buy and sell images that wasn’t nominally based on crypto, I don’t think it would have taken off. Not because it isn’t distributed, since as we’ve seen, so much of what’s required to make it work is already not distributed. I don’t think it would have taken off because this is a gold rush. People have made money through cryptocurrency speculation, those people are interested in spending that cryptocurrency in ways that support their investment while offering additional returns, and that defines the setting for this market of wealth transfer.

The people at the end of the line who are flipping NFTs do not fundamentally care about distributed trust models or payment mechanics, but they care about where the money is. So the money draws people into OpenSea, they improve the experience by building a platform that iterates on the underlying web3 protocols in web2 space, they eventually offer the ability to “mint” NFTs through OpenSea itself instead of through your own smart contract, and eventually this all opens the door for Coinbase to offer access to the validated NFT market with their own platform via your debit card. That opens the door to Coinbase managing the tokens themselves through dark pools that Coinbase holds, which helpfully eliminates the transaction fees and makes it possible to avoid having to interact with smart contracts at all. Eventually, all the web3 parts are gone, and you have a website for buying and selling JPEGS with your debit card. The project can’t start as a web2 platform because of the market dynamics, but the same market dynamics and the fundamental forces of centralization will likely drive it to end up there.

At the end of the stack, NFT artists are excited about this kind of progression because it means more speculation/investment in their art, but it also seems like if the point of web3 is to avoid the trappings of web2, we should be concerned that this is already the natural tendency for these new protocols that are supposed to offer a different future.

I think these market forces will likely continue, and in my mind the question of how long it continues is a question of whether the vast amounts of accumulated cryptocurrency are ultimately inside an engine or a leaky bucket. If the money flowing through NFTs ends up channeled back into crypto space, it could continue to accelerate forever (regardless of whether or not it’s just web2x2). If it churns out, then this will be a blip. Personally, I think enough money has been made at this point that there are enough faucets to keep it going, and this won’t just be a blip. If that’s the case, it seems worth thinking about how to avoid web3 being web2x2 (web2 but with even less privacy) with some urgency.

Creativity might not be enough

I have only dipped my toe in the waters of web3. Looking at it through the lens of these small projects, though, I can easily see why so many people find the web3 ecosystem so neat. I don’t think it’s on a trajectory to deliver us from centralized platforms, I don’t think it will fundamentally change our relationship to technology, and I think the privacy story is already below par for the internet (which is a pretty low bar!), but I also understand why nerds like me are excited to build for it. It is, at the very least, something new on the nerd level – and that creates a space for creativity/exploration that is somewhat reminiscent of early internet days. Ironically, part of that creativity probably springs from the constraints that make web3 so clunky. I’m hopeful that the creativity and exploration we’re seeing will have positive outcomes, but I’m not sure if it’s enough to prevent all the same dynamics of the internet from unfolding again.

If we do want to change our relationship to technology, I think we’d have to do it intentionally. My basic thoughts are roughly:

  1. We should accept the premise that people will not run their own servers by designing systems that can distribute trust without having to distribute infrastructure. This means architecture that anticipates and accepts the inevitable outcome of relatively centralized client/server relationships, but uses cryptography (rather than infrastructure) to distribute trust. One of the surprising things to me about web3, despite being built on “crypto,” is how little cryptography seems to be involved! (A sketch of what this could look like follows this list.)
  2. We should try to reduce the burden of building software. At this point, software projects require an enormous amount of human effort. Even relatively simple apps require a group of people to sit in front of a computer for eight hours a day, every day, forever. This wasn’t always the case, and there was a time when 50 people working on a software project wasn’t considered a “small team.” As long as software requires such concerted energy and so much highly specialized human focus, I think it will have the tendency to serve the interests of the people sitting in that room every day rather than what we may consider our broader goals. I think changing our relationship to technology will probably require making software easier to create, but in my lifetime I’ve seen the opposite come to pass. Unfortunately, I think distributed systems have a tendency to exacerbate this trend by making things more complicated and more difficult, not less complicated and less difficult.
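
As a sketch of what the first point could mean in practice: a centralized service that signs its responses lets clients pin a public key instead of trusting infrastructure. This illustrates the principle only; it is not an existing protocol or API:

// A server that cannot silently lie: clients verify every response
// against a pinned public key, so the infrastructure stays centralized
// but the trust does not.
import { generateKeyPairSync, sign, verify } from "node:crypto";

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Server side: return the payload along with a signature over it.
function signedResponse(payload: string): { payload: string; sig: Buffer } {
  return { payload, sig: sign(null, Buffer.from(payload), privateKey) };
}

// Client side: accept the payload only if the pinned key signed it.
function acceptResponse(r: { payload: string; sig: Buffer }): string {
  if (!verify(null, Buffer.from(r.payload), publicKey, r.sig)) {
    throw new Error("response not signed by the expected key");
  }
  return r.payload;
}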

gm!

1 public comment

freeAgent: Good perspective, but it's a bit weird to see him say that he's never been particularly drawn to "crypto" right up front, *right* after he rolled out MobileCoin integration in Signal worldwide (and he's an advisor to MobileCoin). I view that first sentence similarly to how I'd view it if Michael Jordan said he was never particularly drawn to baseball while he was playing professional baseball.

5 reasons to learn Clojure in 2022


It’s a new year and while we have several exciting projects to announce, one thing that stays the same in our company in 2022 and beyond is our dedication to Clojure. 

After 8 years of usage, we have grown to become one of the largest Clojure teams in the world and we are still growing. In fact, after Nubank and Cognitect we don’t know of any larger Clojure development team. There are plenty of reasons to continue using Clojure and ClojureScript, many of which are covered here. But you may still wonder: what makes Clojure a language of the future?

1. Enterprises keep investing in Clojure

Clojure probably won’t steal the spotlight in lists of popular languages in 2022 outside of the world of functional programming. But even though Clojure does not have the buzz of other languages, it is used by many market leaders and organisations. Some notable examples include Adobe, Apple, Facebook, Amazon, Oracle, Zalando and SoundCloud. Since the Clojure team tends to be a smaller part of a bigger development team, it often flies under the radar and goes unnoticed in the eyes of the public. The fact that business giants keep investing in Clojure is also borne out by the State of Clojure 2021 survey, which reports strong growth in the number of Clojure developers at enterprises with over 1,000 employees. The start-up world, however, remains as significant for Clojure as ever.

The financial sector stands out with many banks and financial services using Clojure, the most prominent example of recent times being Nubank. Nubank is expected to reinvent financial services across Latin America and currently employs over 700 Clojure developers.

2. The Clojure community is growing stronger

Clojure is popular among experienced developers, yet it is a community without hubris, known for being friendly to newbies. In the State of Clojure 2021 survey, one third of Clojure developers stated that they spend time helping new Clojure developers, who make up an increasingly large part of the community. In fact, 25 percent of current developers have been using Clojure for a year or less, which is a great sign of the health of Clojure looking at 2022 and beyond.

The Clojure community is enthusiastic and fast-growing, and you will learn plenty if you decide to participate in it. A sense of community is vital to Clojure developers, and for Flexiana’s part, we love to give back. Some of the ways we have done so include developing and maintaining open-source Clojure and ClojureScript libraries, supporting Clojure core development and sponsoring Clojure conferences.

3. Clojure is the right tool for the job

The heart of any development task is to convert ideas into systems. Lisp languages have always been good at this, smoothing out the friction between thought processes and code. Unfortunately, Lisps have not generally been taught at anything below graduate level, and the radically different approach can be jarring to someone who has already learnt a more conventional programming language. Clojure is a Lisp, and it is bringing the capabilities of that style of thinking and programming out of academia and into the real world.

4. Clojure is fun

After the initial learning phase, which is often quite short, most aspects of the development process become easier. Developers hit “the flow” state with minimal effort and minimal resistance from the language. 

5. Clojure has low friction

Well written Clojure is clear and understandable. Scanning it for meaning in an unfamiliar codebase tends to be easier than in other languages that have complex syntax and require more boilerplate. This does require some care and thought when creating the code, but it is possible to write systems that will be easier to maintain because of the reduced cognitive load required by the future developer. This has long been the limiting factor in software development and Clojure allows us to push that boundary a bit further out.

Conclusion

It may fly under the radar, but learning Clojure in 2022 is as promising as ever, as exemplified by the growth of users in market-leading businesses and organisations as well as the significant increase in beginner Clojurians. If you want to learn a language that smooths out the friction between thought processes and code, that is clear and understandable, and where you’re supported by a strong community, then Clojure is a great bet. But more importantly, if you want to invest your time in a language that is fun (something we all need a bit more of in a post-pandemic world), then look no further.

Want to know more about how we work with Clojure? Let’s have a chat.



Sacha Chua: EmacsConf backstage: picking timestamps from a waveform


We wanted to trim the Q&A session recordings so that people don't have to listen to the transition from the main presentation or the long silence until we got around to stopping the recording.

The mpv video player didn't have a waveform view, so I couldn't just jump to the parts with sound. Audacity could show waveforms, but it didn't have an easy way to copy the timestamp. I didn't want to bother with heavyweight video-editing applications on my Lenovo X220. So the obvious answer is, of course, to make a text editor do the job. Yay Emacs!

Figure 1: Select timestamps using a waveform

It's very experimental and I don't know if it'll work for anyone else. If you want to use it, you will also need mpv.el, the mpv media player, and the ffmpeg command-line tool. Here's my workflow:

  • M-x waveform-show to select the file.
  • left-click on the waveform to copy the timestamp and start playing from there
  • right-click to sample from that spot
  • left and right to adjust the position, shift-left and shift-right to take smaller steps
  • SPC to copy the current MPV position
  • j to jump to a timestamp (hh:mm:ss or seconds)
  • > to speed up, < to slow down

I finally figured out how to use SVG to embed the waveform generated by FFmpeg and animate the current mpv playback position. Whee! There's lots of room for improvement, but it's a pretty fun start.

If you're curious, you can find the code at https://github.com/sachac/waveform-el . Let me know if it actually works for you!


The Most Controversial Change In Emacs History


Just a scant handful of decades after XEmacs introduced a mode line with proportional fonts, we’re thinking about doing the same in Emacs.

Here’s how the mode line looks (by default) in Emacs 28:

Here’s how we’re considering having it look (by default) in Emacs 29:

See? Huge difference. Huge.

The attractive thing about this change is that, well, it’s prettier, but it’s also more consistent with the other elements at the margins of the Emacs frame: The menus and the toolbar have used proportional fonts for a long time, so doing the same in the mode line might also be nice. And you can generally squeeze in more information when using proportional fonts, which is helpful if you’re jamming a lot of stuff into the mode line:

Changing this has been brought up a number of times over the years, but there’s been pushback because some of the elements in the mode line are pretty dynamic, and it’d suck if everything moved around. For instance, when displaying the column number in the mode line, it might be annoying to have the rest of the line shift to the left/right when moving the cursor around in the window.

So we’ve now added a new display spec (called ‘min-width’) that you can slap around bits of text (in the mode line and elsewhere) to ensure that the width never shrinks below a certain point.
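
For instance, a dynamic element can be padded to a minimum width like this (a sketch, assuming the ‘min-width’ syntax currently on master), so the elements after it stay put:

;; Keep the line:column indicator at least six columns wide, so the rest
;; of the mode line doesn't jitter as point moves around.
(propertize (format "%d:%d" (line-number-at-pos) (current-column))
            'display '(min-width (6.0)))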

Perhaps that’ll make a difference in the level of resistance? I guess we’ll find out, because starting today, we’re doing a one month long test on “master”: This new mode line look is enabled by default now, and in a month we’ll evaluate based on feedback.

So give it a whirl for a few weeks, and vote on the emacs-devel mailing list. (And report any glitches, of course. And suggestions for improvements are always welcome.)


Handle Chromium & Firefox sessions with org-mode


I was a big fan of Session Manager, a small addon for Chrome and Chromium that would save all open tabs, assign a name to the session and, when needed, restore it.

Very useful, especially if you are like me, switching between multiple "mind sessions" during the day: research, development or maybe news reading. Or if you would simply like to remember the workflow (and tabs) you had a few days ago.

After I decided to ditch all extensions from Chromium except uBlock Origin, it was time to look for an alternative. My main goals were for it to be browser agnostic and for the session links to be stored in a text file, so I could enjoy all the goodies of plain text. What would be better for that than good old org-mode ;)

A long time ago I found this trick: Get the currently open tabs in Google Chrome via the command line. With some elisp sugar and coffee, here is the code:

(require 'cl-lib)

(defun save-chromium-session ()
  "Reads chromium current session and generate org-mode heading with items."
  (interactive)
  (save-excursion
    (let* ((cmd "strings ~/'.config/chromium/Default/Current Session' | 'grep' -E '^https?://' | sort | uniq")
           (ret (shell-command-to-string cmd)))
      (insert
       (concat
        "* "
        (format-time-string "[%Y-%m-%d %H:%M:%S]")
        "\n"
        (mapconcat 'identity
                   (cl-reduce (lambda (lst x)
                                (if (and x (not (string= "" x)))
                                    (cons (concat "  - " x) lst)
                                  lst))
                              (split-string ret "\n")
                              :initial-value (list))
                   "\n"))))))

(defun restore-chromium-session ()
  "Restore session, by openning each link in list with (browse-url).
Make sure to put cursor on date heading that contains list of urls."
  (interactive)
  (save-excursion
    (beginning-of-line)
    (when (looking-at "^\\*")
      (forward-line 1)
      (while (looking-at "^[ ]+-[ ]+\\(https?://.+\\)$")
        (let* ((ln (thing-at-point 'line t))
               (ln (replace-regexp-in-string "^[ ]+-[ ]+" "" ln))
               (ln (replace-regexp-in-string "\n" "" ln)))
          (browse-url ln))
        (forward-line 1)))))

So, how does it work?

Evaluate the above code, open a new org-mode file and call M-x save-chromium-session. It will create something like this:

* [2019-12-04 12:14:02]
  - https://www.reddit.com/r/emacs/comments/...
  - https://www.reddit.com/r/Clojure
  - https://news.ycombinator.com

or whatever urls are open in the Chromium instance. To restore it, put the cursor on the desired date and run M-x restore-chromium-session. All tabs should be back.

Here is how I use it, with randomly generated data for the purpose of this text:



* [2019-12-01 23:15:00]...
* [2019-12-02 18:10:20]...
* [2019-12-03 19:00:12]
  - https://www.reddit.com/r/emacs/comments/...
  - https://www.reddit.com/r/Clojure
  - https://news.ycombinator.com

* [2019-12-04 12:14:02]
  - https://www.reddit.com/r/emacs/comments/...
  - https://www.reddit.com/r/Clojure
  - https://news.ycombinator.com

Note that this hack for reading the Chromium session isn't perfect: strings will read whatever looks like a string or url from the binary database, and sometimes that will yield small artifacts in the urls. But you can easily edit those out and keep the session file lean and clean.

To actually open the tabs, the elisp code uses browse-url, which can be customized to run Chromium, Firefox or any other browser through the browse-url-browser-function variable. Make sure to read the documentation for this variable.

Don't forget to put the session file in git, mercurial or svn, and enjoy the fact that you will never lose your session history again :)

If you are using Firefox (recent versions) and would like to pull session urls, here is how to do it.

First, download and compile lz4json, a small tool that decompresses the Mozilla lz4json format in which Firefox stores session data. Session data (at the time of writing this post) is stored in $HOME/.mozilla/firefox/<unique-name>/sessionstore-backups/recovery.jsonlz4.

If Firefox is not running, recovery.jsonlz4 will not be present, but use previous.jsonlz4 instead.

To extract urls, try this in terminal:

$ lz4jsoncat recovery.jsonlz4 | grep -oP '"(http.+?)"' | sed 's/"//g' | sort | uniq

and update save-chromium-session with:

(defun save-chromium-session ()
  "Reads chromium current session and converts it to org-mode chunk."
  (interactive)
  (save-excursion
    (let* ((path "~/.mozilla/firefox/<unique-name>/sessionstore-backups/recovery.jsonlz4")
           (cmd (concat "lz4jsoncat " path " | grep -oP '\"(http.+?)\"' | sed 's/\"//g' | sort | uniq"))
           (ret (shell-command-to-string cmd)))
...

Updating the documentation string and function name, and any further refactoring, is left as an exercise.


Org Real


The real:// url scheme was based on the http:// scheme with some differences.

There is no "host" component; all components in a real URL are treated identically and are called containers. Each container can have a query string, whereas the http scheme can only have one query string at the end of a URL. And finally, spaces are allowed in component names.

real://bathroom cabinet/third shelf?rel=in/razors?rel=above/toothbrush?rel=to the left of

Real links are read from the most general to the most specific, so in this example the bathroom cabinet is the topmost component and has a child third shelf with a relationship of "in". The relationship query parameter applies to the container immediately to the left, so this tells org-real that the third shelf is in the bathroom cabinet.
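
To make the reading order concrete, here is a rough sketch of splitting such a link into (container . relationship) pairs. This is an illustration, not org-real's actual parser, and my/parse-real-url is a made-up name:

(require 'subr-x)  ; for `string-remove-prefix'

(defun my/parse-real-url (url)
  "Split URL like \"real://a/b?rel=in\" into (NAME . REL) pairs."
  (mapcar (lambda (part)
            ;; Each container may carry its own ?rel=... query string.
            (let* ((bits (split-string part "\\?"))
                   (name (car bits))
                   (query (cadr bits)))
              (cons name (and query (string-remove-prefix "rel=" query)))))
          (split-string (string-remove-prefix "real://" url) "/")))

;; (my/parse-real-url
;;  "real://bathroom cabinet/third shelf?rel=in/toothbrush?rel=to the left of")
;; => (("bathroom cabinet") ("third shelf" . "in") ("toothbrush" . "to the left of"))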
