• 0 Posts
  • 239 Comments
Joined 2 years ago
Cake day: January 17th, 2022



  • utopiah@lemmy.ml to Linux@lemmy.ml · Launcher for Everything*
    2 days ago

    Superficial feedback, but I can’t read more than 3 lines of code without syntax highlighting. Here, I believe, the lines are kept short for the text, which makes the code even harder to read because of the extra line breaks. Maybe Codeberg allows HTML embedding.

    Now for a comment on the content itself: how is this different from aliases in ~/.bashrc? I personally have a bunch of commands that are basically wrappers or shortcuts around existing ones with my default parameters.
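
    For instance, something along these lines (made-up examples, just to illustrate the kind of thing I mean, not the OP’s launcher):

      # hypothetical ~/.bashrc entries of the kind I mean
      alias ll='ls -lah'                          # existing command with my default parameters
      alias serve='python3 -m http.server 8080'   # shortcut around an existing command
      backup() { rsync -av --delete "$1" ~/backups/; }   # tiny wrapper with defaults baked in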

    Finally, if the result is visual, like dmenu (which I only use a bit on the PinePhone), then please start by sharing a screenshot of the result.

    Anyway, thanks for sharing, always exciting to learn from others how they make THEIR systems theirs!


  • IMHO the question isn’t so much whether you, as a user of such platforms, are “f*cked”, because you sound both mindful and technically savvy. So, on that front, you will be OK.

    The harder question, I would say, is how morally bankrupt you will feel for contributing to worsening the privacy of others for profit. Namely, yes, by using Facebook/Insta/TikTok/etc. you will gain more customers, but those customers gradually lose their privacy while you make those companies bigger by paying them. That means you depend on those companies more while they gain more power.

    Because of that, I would argue that sure, do everything you can to protect yourself, but it can’t stop there. The question then is rather: where else can you find more clients, and maybe even “better” clients who are more aligned with your own views on privacy? It’s definitely a challenge, especially given the trend of surveillance capitalism, but as you acknowledge yourself by using Lemmy, there are actual alternatives.




  • utopiah@lemmy.ml to Linux@lemmy.ml · Deduplication tool
    3 days ago

    FWIW I just did a quick test with rmlint, and as a user I would definitely not trust an automated tool to remove files on my filesystem. If it’s for a proper data filesystem, basically a database, sure, but otherwise there is plenty of legitimate duplication, e.g. ./node_modules, so the risk of breaking things is relatively high. IMHO it’s better to learn why there are duplicates on a case-by-case basis, but again, I don’t know your specific use case, so maybe it would fit.
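
    For instance, if I were to scan anyway, I would scope it to skip such directories first; a rough sketch (GNU find assumed, ~/projects is just a placeholder):

      # skip node_modules entirely, list everything else as dedup candidates
      find ~/projects -type d -name node_modules -prune -o -type f -print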

    PS: I imagine it’d be good for a content library, e.g. ebooks, ROMs, movies, etc.




  • utopiah@lemmy.ml to Linux@lemmy.ml · Deduplication tool
    3 days ago

    I don’t actually know, but I bet it’s relatively costly, so I would at least try to be mindful of efficiency, e.g.:

    • use find to start only with large files, e.g. > 1 GB (depends on your own threshold)
    • look for a “cheap” way to find duplicates, e.g. exact same size (far from perfect, yet I bet it is sufficient in most cases)

    then, after trying a couple of times:

    • find a “better” way to detect duplicates, e.g. SHA-1 checksums (quite expensive)
    • lower the threshold to include more files, e.g. > 0.1 GB

    and possibly heuristics, e.g.:

    • directories where all filenames are identical, maybe based on locate/updatedb, which is most likely already indexing your entire filesystem

    Why do I suggest all this rather than a tool? Because I bet a lot of decisions have to be made manually; see the sketch below for the kind of pipeline I mean.
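
    Something along these lines, as a rough sketch (GNU find/coreutils assumed; /data, the 1G threshold and the filenames are placeholders, and it assumes paths without spaces or newlines):

      # list large files with their size, sorted so identical sizes end up adjacent
      find /data -type f -size +1G -printf '%s %p\n' | sort -n > large_files.txt

      # cheap pre-filter: keep only the sizes that occur more than once
      awk '{print $1}' large_files.txt | uniq -d > dup_sizes.txt

      # expensive step, only on the remaining candidates: hash them and group identical SHA-1 sums
      awk 'NR==FNR {dup[$1]; next} $1 in dup {print $2}' dup_sizes.txt large_files.txt \
        | xargs sha1sum | sort | uniq -w40 --all-repeated=separate

    Each resulting group would still be reviewed by hand before deleting anything.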


  • utopiah@lemmy.ml to Technology@lemmy.ml · Big Tech AI Is A Lie
    4 days ago

    Well, that’s one position; another is to say that AI, as currently developed:

    • is not working due to hallucinations
    • is wasteful in terms of resources
    • creates problematic behaviors in terms of privacy
    • creates more inequality

    and has other problems, and is thus in most cases (say, outside of e.g. numerical optimization as already done at e.g. the DoE, i.e. the “traditional” sense of AI, not the LLM craze) better off being ignored entirely.

    Edit: what I mean is that the argument of inevitability is itself dangerous and often abused.



  • utopiah@lemmy.ml to Technology@lemmy.ml · Big Tech AI Is A Lie
    5 days ago

    Interesting video, and glad to see open source suggested as a potential solution at the end… yet it does not solve hallucinations (for LLMs), energy consumption (any form of AI), or… the fact that the hype itself is an economic and political tool at the service of a few.

    On the final point about regulators, I believe it’s damaging to imply that regulators are ignorant. They are not technical, indeed, but they are not supposed to be. Regulators didn’t need to know how to build a plane to dictate rules that improved safety in that industry, and they didn’t need to be engineers to make the seatbelt mandatory. Yet they do learn from technical experts, e.g. in Europe the JRC, which informs the European Commission, Parliament, etc.