
  • If anyone actually manages to get Plasma Bigscreen working decently, please let me know how you did it. I was really excited when I first learned about it, but after considerable time tinkering, I gave up.

    My first attempt was to install it on an old laptop. It boots up and looks good, but many of the built-in apps hang forever on their splash screens when you try to run them. I also couldn’t figure out how to customize which apps appear in the carousels on the homepage. I’m not sure whether there’s truly no way to do it or the functionality is locked behind one of the apps I can’t launch.

    The Plasma Bigscreen website indicates that it was designed to run on a Raspberry Pi 4, so I gave in and bought one in the hope that the recommended hardware would fare better. I followed the provided links to the latest Manjaro build of Bigscreen (which is over a year old) and installed it on my Pi. Unfortunately, that build apparently suffers from a bug that prevents you from even getting past the login screen on first boot. I don’t remember the details, but I think it was some kind of “can’t log in without setting a password” / “can’t set a password until you log in” loop. Anyway, I found a forum post discussing the problem with no solution, so I gave up on the Manjaro build.

    My final attempt was to install an ordinary desktop Linux distribution on the Pi and then use the package manager to install Plasma Bigscreen as an alternative desktop. This got me in, but there were still a bunch of broken apps. Around this time I also realized that the original Bigscreen concept leaned heavily on voice control via Mycroft AI. Mycroft has gone through major changes since the project launched, and I think those changes have left basically all of the Mycroft-related code in Plasma Bigscreen broken. That may or may not be related to the other problems I had. I never got to experience a fully functional version of the software, so it’s hard to know exactly what is broken and in what ways.

    Anyway, that’s my experience with Plasma Bigscreen. I hope this doesn’t come across as hating on the project. It should be evident from the amount of effort I put in that I really wanted it to work, but in the end I had to conclude that, in its current state, it’s badly broken with no sign of repair.


  • Helpful tip: there’s a setting in Firefox to block all notification requests. It’s under Settings > Privacy & Security; scroll down to the Permissions heading, click the “Settings…” button next to the Notifications entry, and tick the box for “Block new requests asking to allow notifications”.

    I assume there’s an equivalent in Chrome, but I don’t know what it is off the top of my head.

    Ninja edit: Removed my attempt to hyperlink directly to the relevant Firefox settings page because it wasn’t working.


  • I don’t disagree, but Windows’ built-in screen casting is hard to find and clunky to use. Linux is even worse off. Until earlier this year there was no real support from any Linux desktop environment. There’s a GNOME project that’s supposed to add support, announced to ship with GNOME 46, but since I’m not a GNOME user I just tried to install the Flatpak on my Kubuntu machine. It detects my TV but fails to connect to it. Definitely still needs work.


  • Some of that focus involves adding features that have become table-stakes in other browsers.

    Speaking of this, does anyone else feel like Firefox’s inability to wirelessly screencast is a major obstacle when it comes to convincing others to switch away from Chromium browsers? I know Chromecast and AirPlay are both proprietary, and therefore counter to Firefox’s open-source philosophy, but they could at least implement first-party support for Miracast (or DLNA?). A surprising number of smart TVs work well with those protocols; they just tend not to advertise it because most people don’t know what they are.

    I admit that I haven’t looked into this much since I first switched to Firefox as my main browser some years ago, but at the time I found there weren’t even any decent add-ons for screen-casting functionality. I’ve learned to live without it, but I know a lot of people who use that functionality on a daily basis and could (quite justifiably) never be convinced to switch without an equivalent.


  • I agree. The concept is simple, and while it’s not perfect, it isn’t dumb either. This is basically recreating how coal and oil got into the ground in the first place: plants absorbed carbon from the air as they grew, then got buried in a way that prevented them from decomposing and re-releasing it into the atmosphere. My main question here would be whether burying it only 10 feet underground is really enough for long-term storage.

    The other elephant in the room with carbon capture is that it can be a convenient excuse for companies to avoid actually decarbonizing their operations. If, as the article suggests, this is used primarily by industries like cement making that currently have no path to carbon neutrality, then it’s a good thing. If it’s just used as cynical greenwashing by companies who could be doing better, then it’s at best a wash, and arguably a net negative.






  • Out of curiosity, what software is normally being run on your clusters? From my reading, it seems like some companies run clusters for business purposes, e.g. an engineering company might use one for structural analysis of its designs, or a pharmaceutical company might simulate the interactions of new drugs. I assume in those cases they’ve licensed some kind of high-end software that’s been written specifically to run in a distributed environment. I also found references to some software libraries meant to support writing programs for this environment. I assume those are used more by academics who have a very specific question they want to answer (and may not have funding for commercial software), so they write their own code that’s hyper-focused on their area of study.

    Is that basically how it works, or have I misunderstood?
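    For what it’s worth, the libraries I found references to look like MPI implementations. Here’s a toy sketch using the mpi4py bindings, purely illustrative (I haven’t run anything like this on a real cluster, and it’s not from any specific commercial package):

    ```python
    # Purely illustrative: a toy MPI program using the mpi4py bindings,
    # the style of library academic cluster codes are often built on.
    # Run with e.g.: mpirun -n 4 python pi_estimate.py
    import random

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()  # this process's ID within the job
    size = comm.Get_size()  # total number of processes in the job

    # Each rank computes an independent chunk of a Monte Carlo pi estimate...
    samples = 1_000_000
    rng = random.Random(rank)  # different seed per rank so chunks differ
    hits = sum(
        1 for _ in range(samples) if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )

    # ...then the partial results are combined over the cluster's interconnect.
    total_hits = comm.reduce(hits, op=MPI.SUM, root=0)
    if rank == 0:
        print("pi ~=", 4 * total_hits / (samples * size))
    ```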


  • This actually came up in my research. Folding@Home is considered a “grid computer.” According to Wikipedia:

    Grid computing is distinguished from … cluster computing in that grid computers have each node set to perform a different task/application. Grid computers also tend to be more heterogeneous and geographically dispersed (thus not physically coupled) than cluster computers.

    The primary performance disadvantage is that the various processors and local storage areas do not have high-speed connections. This arrangement is thus well-suited to applications in which multiple parallel computations can take place independently, without the need to communicate intermediate results between processors.
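    To make the distinction concrete, here’s a minimal sketch of that “independent work units” model (toy workload; a local process pool stands in for machines scattered across the internet):

    ```python
    # Sketch of the grid-computing pattern: every work unit is fully
    # independent, so nodes never exchange intermediate results
    # (the Folding@Home model). Toy workload for illustration only.
    from concurrent.futures import ProcessPoolExecutor


    def process_work_unit(unit_id: int) -> int:
        # Stand-in for a real simulation chunk; depends only on its own input.
        return sum(i * i for i in range(unit_id * 1_000))


    if __name__ == "__main__":
        # On a real grid these units would be farmed out to remote machines;
        # a local process pool mimics that dispatch.
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(process_work_unit, range(100)))
        print("combined result:", sum(results))
    ```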



  • I’m not sure what you’d want to run in a homelab that would use even 10 machines, but it could be fun to find out.

    Oh yeah, this is absolutely a solution in search of a problem. It all started with the discovery that these old (but not ancient; most of them are Intel 7th-gen) computers were being auctioned off for like $20 apiece. From there I started trying to work backwards toward something I could do with them.


  • I was looking at HP mini PCs. The ones that were for sale used 7th-gen i5s with a 35 W TDP. They’re sold with a 65 W power brick, so presumably the whole system would never draw more than that. I could run a 16-node cluster flat out on a little over a kilowatt, which is within the rating of a single residential circuit breaker. I certainly wouldn’t want to keep it running all the time, but it’s not like I’d have to get my electrical system upgraded just to set one up and run it for a couple of hours as an experiment.
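    (Back-of-the-envelope, assuming a standard North American 15 A / 120 V branch circuit: 16 nodes × 65 W = 1,040 W, comfortably under that circuit’s 1,800 W rating.)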







  • I mean, this is definitely going to be a disaster, but I think the title and article here are a little misleading. The author implies that Warner Brothers is spearheading (and paying for) this venture, but I just read through the buzzword salad of a press release and it barely mentions them. The project is driven by an independent company that licensed the Ready Player One IP from WB. The whole thing very carefully avoids any details about money changing hands, but my guess is either that WB is getting paid or that they’ve negotiated a cut of any theoretical future profits. Of course, the chances of there ever being profits are slim to none, but I’d say at worst they’re net $0 on the deal, and at best they actually made some money by getting paid up front. They might suffer some reputational damage if it becomes a real catastrophe, but as the author of the article mentioned, they are billions in debt, so it’s probably a risk they’re happy to take.