This reminds me of those predictions from 1900 about the year 2000, when they thought we'd all live in enormous skyscrapers and get around by flying cars. Instead we moved out to suburbs because improved logistics systems meant we could buy things from suburban shopping centres rather than having to go into city centres. Revolution, not evolution.
Surely the real advantage of an 'actually good AI' would be getting the AI to do the work itself, rather than just allowing the work to be done in a format with which the human is more comfortable. The underlying problem is that there are too many things vying for our attention.
It sounds like the author is on the same track, with the same mindset. And I like it.
I am also reminded of the Young Lady's Illustrated Primer from Neal Stephenson's Diamond Age. It is not exactly what the author describes but, if the book had a computer backend, it also divorces the user from the computer interface we have come to know. Perhaps for me some future (better) local LLM within such a book is what I want. A kind of companion I ask questions of…
(I mean I suppose I should just do what was posted a day or two ago to the Ask HN: and put a local LLM behind a messaging app so I could just converse with it wherever I am. Tangent: I am kind of fascinated by the idea of a personal LLM that has context stretching back to my earliest days—were I to have started conversing with this synthetic companion at a young age. Imagine the lifetime of context where the LLM knows my habits, how I've changed over the years. I suppose this is nightmare fuel for a number of you.)
There are basically three versions of the book:
1) The ones developed for a few rich kids. These are partially automated, but backed by gig workers. They get what we might call (if you'll pardon the term) "Actually Indians" AI (augmented by the regular type).
2) The one our protagonist gets. This is one of the books from #1, but the distinctive feature here is that an early gig worker (the book calls these "'ractors" when they're doing this kind of work) the protagonist draws takes a special interest in her and intentionally keeps drawing jobs for her over a period of several years. This continuity and personal care by a single real person is what sets it apart and makes her experience so excellent.
3) The mass-market version that's entirely computerized, no human touch. This version brainwashes a fuckload of kids into becoming the "mouse army", and that's really all we see as far as what it can do: something really bad (if convenient for our protagonist).
The message of the book is 100% the opposite of "automated learning-books are amazing". It's "tech for learning sucks ass and/or is outright dangerous if you rely only on it, and a real human tutor who cares about a kid is the best thing around even in a crazy high-tech future-world".
What's the point? LLMs tend towards the mean/average --- I want better in my life and interactions --- it's useful when I need an example DXF or similar rote task, but my current project is a woodworking joint which has no precedent.
Yes, the skeuomorphism angle is an interesting one, and one which is surprisingly absent in the _ur_ description of a stylus-equipped computing device, the slates/tablets from Larry Niven and Jerry Pournelle's _The Mote in God's Eye_ --- this sort of thing seems to be coming back around --- a recent Kindle Scribe firmware update added shape recognition. I'd be _very_ pleased if my new Kindle Scribe Colorsoft could fully become a replacement for my Newton....
Regardless, I have still found them useful. Diagnosing the problems with a car is maybe an esoteric example but is still useful.
For many months now I have been working through learning about and implementing a hobbyist analog computer with LLM as engineer-confidant. I already knew the basics of op-amps and analog computing but was surprised at a lot of the new things I discovered only by way of the LLM saying (for example), "Hey, here's a nice way to get your reference voltages…" and the project benefited from it (and I learned about a new chip/device/technique).
Because it was a profit making venture for car companies. Suburbs are horrifically inefficient, they survive by the twisted "communism" of cannibalizing the dense urban tax bases to support the sprawling, expensive to service and maintain, isolating flatlands.
It was only later that the almighty combustion engine and tire companies forcibly replaced streetcars with buses and trucks, that cars began their hegemonic domination of suburbia. The National Highway System decrees didn't hurt, either, but highways were built in the USA with an ulterior motive of national defense.
Meanwhile, traffic and the stigma around drunk driving (which wasn’t nearly as strong or as strictly enforced before the 90s) have quickly taken much of the bloom off the rose of car-dependent lifestyles. I predict the growth of micromobility options will continue to make cities even more attractive as well, by improving coverage for areas where transit can’t go, generally improving the throughput of city streets, and reducing the space needed for parking for people who live within “not-quite walking but feels silly to drive” distance.
The big gap in the US at least is simply a lack of cities! Everything is still concentrated in a handful of legacy urban centers that survived the waves of “urban renewal” and it’s simply too expensive to house all the people who want to live there without turning them into Hong Kong sized megalopolises, which starts to introduce new problems from overwhelming density. “Urban” development patterns need to expand out to more of the country to take demand pressure off the 5 or 6 American cities with decent mass transit.
[0] https://www.youtube.com/watch?v=7wa3nm0qcfM [1] https://dynamicland.org/
They just want OpenClaw with printing and scanning privileges. Every morning OpenClaw prints out a task list or items that need action, the author writes notes/responses, and places it on the scanner. This is basically how my program director worked at my last job. Every morning the secretary would have his schedule printed out, he'd go to meetings and write notes, and would pass by his secretary and stick a note or two on her desk saying "set up a meeting with XYZ org/team within the next few days on ABC topic." The secretary would also print documents/presentations and he'd mark them up throughout the day with changes he wanted made, and he'd drop the documents off when he was done going through them, and the secretary would distribute the documents to their respective POCs to make the changes.
Basically the only thing the author hasn't mentioned that the secretary did is that the secretary also acted as a gatekeeper for access to the program director, either in real-time ("no, you can't go in, they are meeting with a higher level director") or would take a request for a meeting and have enough personal context on whether the director would want the meeting themself or want to see it go through a division chief first. Not sure if OpenClaw can do that, but just about everything else is totally do-able. Not sure if I really want to see someone wasting this much paper just to "feel analog" but I suppose it probably isn't a big deal since most people won't do it this way, and will stick to digital forms of communication with their OpenClaw secretary.
UNIX Principle, anyone? Do one thing, and do it well - it seems like in this 'age of AI' the industry is rediscovering, by detour, decades-old best practices all over again.
But otherwise, having 'interfaces' printed out to you and a multi-modal LLM later working from your notes on them sounds really interesting and less stressful than modern 'computing'.
The Office's Michael Scott would be proud - Paper may just be the future of Digital after all!
Human picks up all the sheets out of the printer, writes out replies with pen
Human puts the stack of answered email sheets in a multi-page scanner
Scanner physically scans them, agent transcribes them and matches them back to the incoming emails via the unique ID on each sheet, sends replies
You could adjust this flow for anything where human input is just one part of a larger sequence: just add print -> write -> scan into your flow where you'd normally have a human type. It's kind of a rebirth of faxing
When I showed her the reply button in Eudora (this was in 2001), she was so happy that she bought me a cake.
She struggled with IT but was sharp as a tack otherwise. So far she's the only boss I've ever really liked.
Before everyone just started using Docusign anyway, I'd bought houses with a phone "scanner". LOL.
I don't think I started with it, but for a very long time I've had an app called TinyScanner that's good-enough at edge detection, can de-noise or make a document entirely black & white, and can glue multiple pages together into a PDF. The results look better than plenty of flatbed scanner results I've seen, if not as good as the best of those.
https://x.com/daviddorg/status/2037050583274954882
Ditto with Forth dumping memory, creating literal structures for numbers and whatnot. There's also the 'see' command, besides dumping literal memory bytes.
Being a REPL as well helps a lot. But Forth gets to a lower level than S9 itself.
I question the idea of pastoralism though; I would argue this is another kind of construct. Laurel Thatcher Ulrich’s ‘The Age of Homespun’ talks about this in detail, and how handcraft revivals were an expression of fear or anxiety about the radical changes brought about by industrialisation, and became a sort of myth-making device for the rejection of technological overlords.
In any case, Paper Computer charts a neat reformulation of the personal computer into something more interesting. If all individual computing tasks become distributed back into real spaces, objects, and physically manipulable media, it becomes more of an interpersonal computer, and distributed computing power can be pushed to things that don’t ordinarily engage with computational tasks, such as wind or plants or anything within the shared working environment.
Using paper and space to organize ideas is nice, but that's a niche use-case. And in any case, you'll have to digitize it afterwards anyway, so better to start on the digital version immediately, and be good at it. Every time I start a new project, I'm tempted to take a pencil and paper, but then I refrain and use draw.io or the like because I know it will win out in the long run.
For the rest, you can easily customize your phone / browser / anything to be less distracting.
As for using AI just for convenience, this looks very expensive in terms of resources.
This holds even with really-nice drawing interfaces like Procreate on a 13" iPad. Paper's still better for some things. Outside of work, the way I make maps (of just about any zoom-level) for RPGs I run is to sketch them on paper, take a photo of that and import it to Procreate, trace the lines there (in a new layer), and add color/texture. I get way better results faster, and am way less frustrated, than if I start with a blank "sheet" on the iPad. The paper sitting fully flat on my table, being able to easily and precisely turn it this way and that, erasing or smudging out or just X-ing elements I mess up, plus just messing up way less to begin with, all that adds up to real paper being a way better UI for an initial draft-sketch, for me.
Not that I haven't done exactly the same thing as you, I never keep paper around and my handwriting has gotten terrible. I'm saying this to myself and others as well.
Just the other day, I noticed my thinking was so hijacked by distractions while building something (with AI help) that I started writing in a notebook to stay on track. The last time I'd written in the notebook was 3 years ago; in this case writing stuff down in it really helped to get me unstuck.
I'm excited to imagine workflows that could make computing a more physical activity. Thanks for writing and sharing this.
(My blog post btw if you’re curious https://bhave.sh/make-humans-analog-again/)
http://www.43folders.com/2004/09/03/introducing-the-hipster-...
That said, I do much prefer reading on paper, or at least on e-ink, for many of the same reasons outlined in the post. Computers and phones are just too distracting, and too dynamic.
And I'd love some way to write down shopping lists or appointments, and have them available wherever, without having to pull out the phone. Our current method is a whiteboard + a photo whenever we need it, which doesn't quite cut it.
The only compromise would be a limited area like a physical desktop that had affordances like an overhead camera and some form of paper output.
It’s fantastic that computers can be so effective at this read-only work but so much of what I do needs write feedback from the machine.
I see this seemingly everywhere. People are looking for these extreme solutions to solve the problem of getting distracted by an app like Instagram or TikTok on their phone. Wouldn’t uninstalling the app, and going a step further, deleting the account, be the more pragmatic solution here? We control what is installed on our devices, what accounts we have, and which notifications we receive. If someone has enough agency to move to a pen and paper, surely they can uninstall some apps?
While I like the idea of having a magic paper notebook that would somehow interact with computer systems, that idea seems like mostly science fiction without having significant levels of technology all around you (cameras, projectors, etc) which would kind of defeat the purpose imo.
I watched the first video on Dynamic Land and I think I’d feel very uncomfortable in a room like that. Look the wrong way and catch a projector’s light in the eye, and once big tech gets into the game, who knows what happens with all the data from the cameras. I’ve grown rather paranoid.
A phone with just utilities installed, no social media, or going a step further to an e-ink tablet (something like a reMarkable), seems like it would get most of the way there and actually work today. The biggest concern then becomes the web browser, but the big tech companies do most of the work for us by making sites insufferable to use while logged out and without an app.
Something might be able to get rigged up with RocketBook as well, for an actual pen on paper experience, but having to take a picture of the pages is kind of a pain. I have one and the novelty wore off very quickly; it has sat in a drawer for years now.
I’ve struggled with this idea a bit myself, as I sometimes romanticize the idea of using analog tools, but when they exist alone on an island, that seems to come with some considerable downsides in the modern world.
Apple Notes can be good for some of this too. Instead of using ChatGPT, Apple Notes can use the phone camera to do live OCR on text and add it into a note. I’ve used it a couple times and it’s pretty handy, when I remember it.
(On HN 2017, 138 comments: https://news.ycombinator.com/item?id=15960056)
Emacs, and technologies built on it, such as org-mode, come somewhat close to ideas expressed here by having plain text in a buffer be the unifying data format. You can organize stuff by just moving snippets of text around.
I think it's difficult in practice to design data manipulation interfaces based on real-world objects because atoms are heavy and bits are not. Data is just much more malleable and transformable than real world objects, at least at the pre-Diamond Age tech level we're at. But maybe ML will help make this easier by allowing computers to track and scan the objects more easily.
Although the cardboard implementation is kind of the point, I think it's cool that someone made an FPGA version (dead link though, RIP drdobbs.com).
Just a simple:
> Folk Computer is a research & art project centered around designing new physical computing interfaces.
That line, taken from ./notes/tableshots.txt and placed with a link towards the top, would imo be quite helpful.
(Sorry, this is just one of my pet peeves: needing to know what a project is about before being able to read about it is just terrible UX, although extremely common as we as humans tend to forget that we know things others don't)
> Hello, Folk Computer is a research & art project centered around designing new physical computing interfaces. [read more](./notes/tableshots.txt)
Is more than sufficient, most of the website is for people who already know about the project. I'm just asking for a small part at the beginning for us who are new :)
Also check out the spirograph, along with the slide rule and any abacus.
It's essentially a poor man's hacked-up DynamicLand - projector, camera, live agent. There are so many things you could do if you had a strong working baseline for this. My kids used it to create stories, learn how to draw various things, and watch safe videos they could hold in their hand.
There's something weirdly compelling and delightfully physical about holding a piece of paper that shows a live rocket launch, with the flames streaming down the page. It could also project targeted pieces of text, such as inline homework advice, or graphs next to data. It doesn't take long to imagine any other number of fun use cases, and it feels a lot more freeing and inspiring than keeping everything bound to a screen.
Github - https://github.com/Pugio/Orly (hacky minimal prototype that did the thing)
Video Pitch - https://youtu.be/-9l1x7GnmxU (filmed an hour before the deadline on an old phone with no sleep)
https://www.theverge.com/2022/10/20/23415167/amazon-glow-sup...
If you don't mind me asking, what hardware did you use? Especially for the projector, I'm guessing it needs quite a strong bulb in order to be seen in broad daylight?
The Folk Computer people have some incredible work they've been doing too, that's definitely worth looking at for anyone interested. Their integration of a novel display technology is really sweet too, allowing for good visibility in a variety of conditions, which I love. https://folkcomputer.substack.com/ https://folk.computer/ https://news.ycombinator.com/item?id=39241472 (165 points, 2 years ago, 53 comments)
I asked at some point if I could theoretically develop an application that could literally be controlled by a Fisher-Price toy, like a little plastic car console or something. Or even potentially have a real keyboard that isn’t connected to anything, but the Vision Pro can just see my keypresses and apply them as if I was actually pressing something. The former case is possible, but surprisingly difficult; the latter case isn’t really there yet (it requires too much precision, and latency is worse than just using a Bluetooth keyboard).
Either way, the idea of a computing environment that meshes with and directly interacts with the real, physical objects around you is an interesting premise I’d like to see taken further with “Spatial Computing”/AR. Scanning and recording things I’m writing on a whiteboard or in a notebook by recognizing that I’ve picked up a pen and am writing something down would just be getting started.
Of course, if we’re ambiently recording everything you’re doing there will need to be some kind of regular process/interface to “sift” everything at the end of the day. This is the core of the Getting Things Done methodology. Everything goes into a big “intake list” and then you do periodic check-ins throughout the day where you review the list and decide whether to move those to a series of sub-lists to “do this now,” “do this soon,” or “do this someday.”
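The intake-and-review loop described above is simple enough to sketch directly. A minimal illustration, with names invented for the example (GTD itself prescribes no particular data structure):

```python
from dataclasses import dataclass, field
from typing import Callable

# GTD-style triage: everything captured lands in "intake"; a periodic
# review empties intake, routing each item to now / soon / someday.
@dataclass
class Tray:
    intake: list[str] = field(default_factory=list)
    now: list[str] = field(default_factory=list)
    soon: list[str] = field(default_factory=list)
    someday: list[str] = field(default_factory=list)

    def capture(self, item: str) -> None:
        """Ambient capture: no decisions made at capture time."""
        self.intake.append(item)

    def review(self, decide: Callable[[str], str]) -> None:
        """Empty the intake list; decide(item) must return
        'now', 'soon', or 'someday'."""
        while self.intake:
            item = self.intake.pop(0)
            getattr(self, decide(item)).append(item)
```

The key property is that capture and decision are separate steps: nothing is lost during the day, and all the sorting effort is concentrated into the check-ins.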
Edit: you've unfortunately been breaking the site guidelines badly and frequently. Examples (among many others):
https://news.ycombinator.com/item?id=47706755
https://news.ycombinator.com/item?id=47603599
https://news.ycombinator.com/item?id=47476320
https://news.ycombinator.com/item?id=47068759
If you keep this up, we're going to have to ban you. I don't want to ban you, so if you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.