That sounds like Xiaomi. The best price to performance ratio of any OEM, but at the cost of terrible software and this… experience… when you want to get rid of it.
Worth noting that not all OEMs are like this.
That’s a reasonable per-core size, and it doesn’t make much sense to add up the L2 across all the cores if your goal is to fit your data within L2 (like in the article).
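Back-of-the-envelope version (the sizes below are made-up placeholders, not the article’s numbers): since each core’s L2 is private, the data only “fits in L2” if it fits in one core’s slice.

```python
# Hypothetical sizes, just to illustrate the point above.
PER_CORE_L2 = 1 * 1024 * 1024    # 1 MiB of private L2 per core (made up)
CORES = 8                        # summed "total L2" would be 8 MiB

def fits_in_l2(working_set_bytes: int) -> bool:
    # A single core streaming through the data only ever sees its own slice,
    # so summing L2 across cores says nothing about cache residency.
    return working_set_bytes <= PER_CORE_L2

print(fits_in_l2(512 * 1024))       # True
print(fits_in_l2(4 * 1024 * 1024))  # False, even though 4 MiB < 8 MiB of "total L2"
```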
Please don’t pretend open-source devs don’t constantly complain about pesky PRs 😅
<i>I</i>'ve <u>seen</u> many <b><u>more</u> complaints</b> about <a href="https://0.0.0.0/random_img.tiff">people</a> constantly <marquee>demanding</marquee> their specific <h1>annoyances</h1> to be fixed without ever <i>submitting <u>a single <b>line of code</b></u></i>. <i>Maintainers</i> are pretty much <b>universally</b> welcoming to code <h2>contributions</h2> <br><br><br><br><br><br>
I soooo hope this does something funky with someone’s Lemmy client
Maybe management hasn’t decided on the exact promises they’re willing to make? Also, there are two years left before it becomes important, while previously there was always a generation going out of support within a year.
That’s more of a storage thing; RAM does much smaller transfers - for example, DDR5 memory has two independent 32-bit (4-byte) channels with a minimum of 16 transfers in a single “operation”, so it does 64 bytes at once (or more). And CPUs don’t waste memory bandwidth by transferring more than absolutely necessary, as memory is often the bottleneck even without writing full pages.
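The arithmetic, for anyone who wants it spelled out (the channel width and burst length are the standard DDR5 numbers; the rest is just illustration):

```python
# One DDR5 sub-channel: 32 bits wide, minimum burst of 16 transfers (BL16).
CHANNEL_WIDTH_BITS = 32
BURST_LENGTH = 16

bytes_per_burst = (CHANNEL_WIDTH_BITS // 8) * BURST_LENGTH
print(bytes_per_burst)  # 64 - conveniently the size of a typical CPU cache line
```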
The page size is relevant for memory protection (where the CPU will stop the program’s execution and hand control back to the operating system if the program tries to do something it’s not allowed to do with the memory) and virtual memory (which is part of the same mechanism, though they are theoretically independent concepts). The operating system needs to maintain a table describing which memory the program has which kind of access to, and with bigger pages that table can be much smaller (at the cost of wasting space if the program only needs a little bit of memory of a given kind).
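Rough numbers to make the page-table trade-off concrete (the 1 GiB mapping is an arbitrary example, not something from the thread):

```python
# Entries needed to map the same 1 GiB region at different page sizes.
MAPPED_BYTES = 1 * 1024**3  # 1 GiB (arbitrary example)

for page_size in (4 * 1024, 16 * 1024, 2 * 1024**2):  # 4 KiB, 16 KiB, 2 MiB
    entries = MAPPED_BYTES // page_size
    print(f"{page_size // 1024:>4} KiB pages -> {entries:>6} entries")

# Bigger pages = far fewer entries to store and walk, but a program that only
# needs a few bytes with some particular permissions still burns a whole page.
```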
There’s no inherent guarantee that a router has a firewall configured properly, or has it enabled.
If it’s not an enterprise router (where you sometimes start with a blank configuration), it most definitely does have a firewall blocking incoming traffic by default.
In the deployments you’re seeing, are ISPs handing out /120 blocks to each router?
/120 is not enough for IPv6 to work reasonably. It kinda requires the smallest block to be /64, otherwise half the cool stuff about IPv6 breaks. So you should get something between /48 and /64 (the recommendation for ISPs is /56 for residential users, so they can subdivide their network into 256 /64 networks, and /48 as the default commercial allocation).
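The subnet math, in case anyone wants to check it (documentation prefix, not a real allocation):

```python
import ipaddress

# A /56 residential delegation splits into 2**(64 - 56) = 256 /64 networks.
delegation = ipaddress.ip_network("2001:db8:abcd:ab00::/56")
subnets = list(delegation.subnets(new_prefix=64))

print(len(subnets))   # 256
print(subnets[0])     # 2001:db8:abcd:ab00::/64
print(subnets[-1])    # 2001:db8:abcd:abff::/64
```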
Does that require the ISP to have access to alter your home router, or do customers configure the DHCP themselves (which seems unlikely to scale)?
There is DHCPv6, but it’s not as central to a network as DHCP is for v4 networks. IIRC Android doesn’t even support it. IPv6 uses Router Advertisements (RA) to tell devices what prefix they’re in (plus a few things that were originally DHCP options, like the preferred DNS servers), and the devices then pick their own address using the SLAAC mechanism (originally it was derived from the MAC address, but nowadays it should be a random number). RA supports “multilayer” networks where each following router further subdivides the prefix it got.
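For the curious, here’s a sketch of the original EUI-64 flavour of SLAAC (modern stacks prefer random / stable-privacy interface IDs, as mentioned above; the prefix and MAC below are made-up examples):

```python
import ipaddress

def eui64_interface_id(mac: str) -> bytes:
    """Classic SLAAC: turn a 48-bit MAC into a 64-bit interface ID
    by inserting ff:fe in the middle and flipping the universal/local bit."""
    octets = bytearray(int(part, 16) for part in mac.split(":"))
    octets[0] ^= 0x02
    return bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])

def slaac_address(ra_prefix: str, mac: str) -> ipaddress.IPv6Address:
    net = ipaddress.ip_network(ra_prefix)              # the /64 advertised in the RA
    iid = int.from_bytes(eui64_interface_id(mac), "big")
    return net[iid]                                    # prefix + interface ID

print(slaac_address("2001:db8:abcd:ab00::/64", "52:54:00:12:34:56"))
# -> 2001:db8:abcd:ab00:5054:ff:fe12:3456
```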
If you want a static address (for example for a server), you can either configure it manually on the device (using tokenized addresses, i.e. “static local part with dynamic prefix”), or use a DHCPv6 server to assign the address (in which case the RA responses from your router need to indicate that there is a DHCPv6 server on the network).
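A tokenized address is really just “whatever /64 the RA currently advertises” glued to a fixed interface ID, something along these lines (the prefixes and the ::cafe token are made-up examples):

```python
import ipaddress

def apply_token(ra_prefix: str, token: str) -> ipaddress.IPv6Address:
    """Combine a dynamic /64 prefix with a static local part ("token")."""
    net = ipaddress.ip_network(ra_prefix)
    iid = int(ipaddress.IPv6Address(token))  # token occupies the low 64 bits
    return net[iid]

# The server keeps ::cafe as its local part no matter what prefix the ISP hands out:
print(apply_token("2001:db8:abcd:ab00::/64", "::cafe"))  # 2001:db8:abcd:ab00::cafe
print(apply_token("2001:db8:1234:5600::/64", "::cafe"))  # 2001:db8:1234:5600::cafe
```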
Also, you talked about the fc00::/7 prefix (or its locally managed half, fd00::/8) as proof that NAT is used with IPv6, but… there’s absolutely nothing stopping you from having both a globally routable address and a local-only address at the same time. IPv6 already requires you to have at least two addresses when you connect to any network - a link-local address and whatever other address you get assigned (btw, IPv4 never prevented you from doing the same thing, it just wasn’t directly encouraged and wasn’t widely used, and DHCP didn’t support handing out multiple addresses, unlike RA).
You can even get a security “improvement” over the claimed NAT scenario this way - if you don’t assign a global address to a node, then not only will it be unreachable from the internet, it will also be unable to connect to the internet itself, while remaining reachable from your network without any issues. “Air gapping” (I know, I know… but people use this term for “no internet” now) for folks afraid of firewalls!
Or when you have the audacity to take a picture with it
I would hope it’s a special, heavy-duty kind at least.
I’ve seen an expensive microwave with a capacitive touch panel right above the door (and the door was the classic oven style, so attached by the bottom edge). If you ever had a phone with crappy moisture detection, you know where this is going.
You put your food in the microwave. Turn it on and let it heat the food up. Open the door, take the food out and close the door again. Congratulations, your microwave has probably just turned itself back on, because it interpreted the humid hot air rising from the briefly opened door as you touching the screen. And because most of the touch panel is “touchable”, there’s a pretty good chance this gust of humid air can successfully pick a cooking/heating mode and confirm it.
The microwave randomly navigating its own touch screen happened pretty much every time; making it through all the menus and actually turning itself on succeeded about 10% of the time.
In short, I wouldn’t expect a microwave interface to have any thought put into it.
This is referring to the recent news about Google gaining huge market share. This new drop simply means there was no dramatic change and last month’s data was flawed.
Even Linux is slowly moving to immutable systems like Android’s. It is simply the best approach for an OS used by non-technically-inclined people - it’s much harder to screw up beyond repair by accident - and it’s clearly the future of operating systems (well, the future for Linux at least; mobile platforms and maybe macOS are already there).
My two cents: the only time I had an issue with Btrfs, it refused to mount until I ran an FS repair tool (it was fine afterwards, and I knew which files needed to be checked for possible corruption). When I had an issue with ext4, I didn’t know about it until I tried to access an old file and it was 0 bytes - a completely silent corruption I found out about probably months after it actually happened.
Both filesystems failed, but one at least notified me about it, while the other just “pretended” everything was fine as it ate my data.
Don’t be ridiculous - this is a lab environment, they can faithfully recreate the suffering as long as the ethics committee doesn’t get notified.