Infosec Press

Reader

Read the latest posts from Infosec Press.

from Kevin Neely's Security Notes

Image: a typical resume content extraction workflow, from neurond.com

I used to keep my résumé (from here, “resume”) very up-to-date. For a long time, I had a resume crafted in #LaTeX because I have a long history of using that typesetting and markup language for purposes other than the ones most people think of: I wrote my college English papers in it, and while I was a practicing attorney I built a slew of templates that would generate letters, motions, and envelopes from source .tex files. Keeping content in text makes it more portable across platforms and applications, whereas the nature of Microsoft Word is that you need to fully re-create the resume every couple of years because some invisible formatting munges the entire document.

TL;DR I ended up using RenderCV as mentioned below in the [[Resume Workflow#RenderCV|RenderCV section]].

In the time since I last relied upon a resume, the method of applying for jobs –and more importantly, how recruiters review submissions– has changed pretty drastically. And despite all the great advances in technology over the past ten years, apparently, HR systems still are not that great at parsing a PDF or Word doc into text that can be machine-read by whatever algorithms and/or AI they’re using to perform the first pass. Because of this, you want to make sure to submit a machine-friendly description of your experience. There really should be a standard for all this stuff that makes it easy on both the applicant and the hiring manager. Like, I don’t know, some sort of HR standards body or something. A standard has never emerged, and I suspect that LinkedIn has a lot to do with that.

Additionally, having an easy way to keep one’s resume in sync and in multiple formats means that it can be quickly used for many purposes, from printing an attractive hard copy to piping it through some [[Fabric]] AI workflows. So this set me on a fairly long hunt for a system where I could write once, and generate in multiple formats.

The search for a resume workflow

First round

LaTeX & Pandoc

Since my resume was already in LaTeX, using the 20 second CV set of templates –which I think is very nice– I went and updated it and then ran it through pandoc, a multi-format document converter. The results ended up being pretty poor and not useful. The PDF looked great, obviously, but pandoc did not understand the LaTeX very well and the Markdown output required a lot of edits.

We want everything to look good upon compilation/export/save as/whatever, so this was not an option.

Interlude

I had kind of given up at this point, figuring I either needed to just go with Google Docs or maintain a Markdown version and attempt to keep the two in sync. Then, I came across a post about an auto-application bot and the author had a related project that used resume information formatted as YAML to create a specific resume based upon a job description or LinkedIn post.

Resume from Job Description

This project is called resume render from job description (no cute animal names or obtuse references in this project!), and I gave it a try, but it appeared to require all the fields, including e.g. GPA. I don’t know about you, but I'm way past the point in my career where I'm putting my GPA on a resume, so it wasn’t that useful.

It was late on a Thursday night, so obviously it was time to look a bit further down the rabbit hole.

Online options

I found a number of projects that follow a service model, where they host and render the resume for you. These included resume.lol (I question the naming choice here), Reactive resume (opensource, excellent domain name, and it has nice documentation), and WTF resume (my thought exactly!).

These all came from a post of 14 Open-source Free Resume Builder and CV Generator Apps.

JSONResume

As I traveled further down the Internet search rabbit hole, I came across JSON Resume, an #opensource project with a hosting component where people craft their resumes in JSON and it can then render in a number of formats either via a command-line tool or within their hosted service, making it a kind of hybrid option.

At this point, I felt like I was almost there, but it wasn’t exactly what I wanted. JSON Resume is very focused on publishing within its hosted ecosystem. The original #CLI tool is no longer maintained, and a new one is being worked on, which appears minimal but sufficient for the task. A nice thing is that they have some add-ons and have built up a sort of ecosystem of tools. Looking over the project’s 10-year history, those tools have a tendency to come and go, but such is the nature of OSS.

The Award for “Project Most Suited to My Workflow” goes to….

Another great thing about JSON Resume is that they, i.e. Thomas Davis, have done a fantastic job of cataloging various resume systems out there in their JSON Resume projects section. There is so much interesting stuff here –and a lot of duplicative effort (ahem, see the “HR standards” comment above)– that you can spend a couple of days looking for the project that best fits your needs. For me, I landed on RenderCV, which is not only in the bibliography, but also mentioned on the Getting Started page because there are tools to leverage JSON Resume from RenderCV!

So without further ado…

RenderCV

While RenderCV is a part of the JSON Resume ecosystem, in that people have created scripts to convert from the latter to the former, it is a completely separate and standalone project, written in #python and installable via pip. RenderCV’s approach is to leverage a YAML file and, from that, generate consistent resumes in PDF, HTML, Markdown, and even individual PNG files, allowing the applicant to meet whatever arcane requirements the prospective employer has.

graph LR

	YAML --> TeX & Markdown 
	TeX --> PDF & HTML & PNG

Resume generation workflow

Using RenderCV

Getting started with RenderCV is like pretty much any other project built in python:

  1. Create a virtual environment using venv or conda, e.g. conda create -n renderCV python=3.12.4
  2. Install via pip with a simple command pip install rendercv
  3. Follow the quick start guide and create a YAML file with your information in it (a minimal sketch of such a file follows this list)
  4. Run rendercv render <my_cv>.yaml
  5. View the lovely rendered résumé
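For reference, here is a minimal sketch of what that YAML source might look like. The field names follow the RenderCV quick-start schema as I understand it and may differ between versions, and the entries are obviously placeholders:

cv:
  name: Jane Doe
  email: jane@example.com
  sections:
    professional_experience:
      - company: Example Corp
        position: Security Engineer
        start_date: 2020-01
        end_date: present
        highlights:
          - Led the incident response program.
    education:
      - institution: Example University
        area: Computer Science
        degree: BS
        start_date: 2012-09
        end_date: 2016-06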

Extending RenderCV

This was great, as I now have a very easy-to-edit source document for my résumé and can quickly create others. I’m hoping Sina, the author, makes the framework a bit more extensible in the future because the current templates are oriented toward people with STEM backgrounds looking for individual contributor roles. However, as some of us move further in our careers, the résumé should be less about skills and projects and more about responsibilities and accomplishments as we lead teams. I have enhanced the “classic” and “sb2nov” themes so that they take these keywords as subsections of a specific company/role combination under the professional_experience section.

Theme update for Leaders and Managers

I created a fork which contains updates to v1.14, adding the “Responsibilities” and “Accomplishments” subsections for company: under the Experience section.
This allows leaders to craft their resume or CV in such a way that it highlights the breadth of their influence and impact on the organization.

The following themes support the additional subsections:

  • markdown
  • classic
  • sb2nov

A non-updated theme will simply ignore the content under these subsections; omitting these sections will make the resume look like the original theme. Hopefully the framework will be more extensible in the future and I can add this as a pull request.
In the meantime, the forked repo at https://github.com/ktneely/rendercv4leaders should work on its own, or the /ExperienceEntry.j2.tex and /ExperienceEntry.j2.md files from those themes can simply be copied over the existing ones.
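For example, copying the modified templates from the fork into an existing install might look something like the following; the site-packages lookup and the paths are illustrative, so adjust the theme name and locations to your environment:

# clone the fork with the updated themes
git clone https://github.com/ktneely/rendercv4leaders
# locate the rendercv package inside the active virtual environment
RENDERCV_DIR=$(python -c "import rendercv, os; print(os.path.dirname(rendercv.__file__))")
# copy the updated Experience entry templates over the stock "classic" theme
cp rendercv4leaders/rendercv/themes/classic/ExperienceEntry.j2.tex "$RENDERCV_DIR/themes/classic/"
cp rendercv4leaders/rendercv/themes/classic/ExperienceEntry.j2.md "$RENDERCV_DIR/themes/classic/"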

How to use

Usage is extremely straightforward, as this merely extends the framework with a couple of new keywords for the Experience section, each keyed to a preceding company declaration. Here is an example:

professional_experience:
  - company: NASA
    position: Director of Flight Operations
    location: Houston, TX
    start_date: 1957-03
    end_date: 1964-06
    responsibilities:
      - Manage the Control room.
      - Write performance reports.
      - Smoke copious amounts of cigarettes
    accomplishments:
      - 100% staff retention over the course of 9 rocket launches.
      - Mobilized and orchestrated multiple teams to rescue astronauts trapped in space.
      - Lung cancer.

This will then render “responsibilities” and “accomplishments” as italicized sections under the job role, highlighting what a difference you made while performing in that role.

Maintaining Multiple Versions

This is basically what it all comes down to: the ability to maintain different versions for your target companies. While some work is being done to modularize the source content, it is not yet to the point where each section of the resume is a building block that can be invoked at compile time. What I do is maintain different YAML files and use the parameters in the rendercv_settings section to direct the output to different, meaningfully-named directories while maintaining a generic name for the file itself.
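As a sketch, the per-target YAML files can differ only in a settings block along these lines; the exact key names under rendercv_settings have moved around between releases, so check the docs for your version:

rendercv_settings:
  render_command:
    output_folder_name: largecorp_output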

So, instead of “Kevin-LargeCorprole.pdf”, “Kevin-Startuprole.pdf”, etc., I simply send “Kevin-CV.pdf”. This way, it’s not incredibly obvious to the reviewer that I have specially crafted a resume for that job; it just happens to look like I have exactly what they’re looking for in my default resume.

Automation

Want to automate the build of your resume whenever you update the source file(s)? Look no further than the rendercv pipeline to generate the output whenever you commit the source to GitHub.
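If you would rather wire it up by hand instead, a minimal GitHub Actions workflow along these lines should do the job. This is a sketch rather than the official pipeline; the YAML file name and output folder are placeholders, so adjust them to your repository:

# .github/workflows/render-cv.yml
name: Render CV
on:
  push:
    paths:
      - "*.yaml"
jobs:
  render:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install rendercv
      - run: rendercv render Kevin-CV.yaml
      - uses: actions/upload-artifact@v4
        with:
          name: rendered-cv
          path: rendercv_output/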

Also, since version 1.15, the --watch flag will watch the source file locally and re-compile every time you save the source YAML file.

References and further exploration

  1. Neurond.com blog post: What is a CV/Resume Parser and How Does it Work?, Trinh Nguyen, Aug 16, 2022.
  2. TeXMaker: an Open-source TeX editor
  3. RenderCV user guide
 
Read more...

from csantosb

Remote #ci is the way to go in #modernhw digital design testing. In this #ciseries, let’s see it in practice with some detail using two of the most popular forges out there.

Gitlab

The gitlab #gitforge includes tons of features. Among these is a facility called the container registry, which stores per-project container images. Guix pack allows the creation of custom #reproducible environments as images. In particular, it is possible to create a docker image out of our manifest and channels files with

guix time-machine -C channels.scm -- pack --compression=xz --save-provenance -f docker -m manifest.scm

Check the documentation for options.
Remember that there are obviously alternative methods to produce docker images. The point of using guix is its reproducibility: you’ll be able to create a new, identical docker image out of the manifest and channels files at any point in time. Even better: you’ll be able to retrieve your manifest file from the binary image in case the original gets lost.
Then, this image must be loaded into the local docker store with

docker load < IMAGE

and renamed to something meaningful

docker tag IMAGE:latest gitlab-registry.whatever.fr/domain/group/NAME:TAG

go remote

Finally, the image is pushed to the remote container registry of your project with

docker push gitlab-registry.whatever.fr/domain/group/NAME:TAG

At this point, you have an environment where you’ll run your tests using gitlab’s ci features. You’ll set up your gitlab runners and CI configuration files to use this container to execute your jobs.
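As a sketch, a minimal .gitlab-ci.yml using that image could look like this; the image path reuses the placeholder tag from above, and the test command is the VUnit-style run used later in this post, so adapt both to your project:

# .gitlab-ci.yml: run the testbenches inside the guix-built image
test:
  image: gitlab-registry.whatever.fr/domain/group/NAME:TAG
  script:
    - python3 sim/run.py --ci-mode -v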
As an alternative, you could use an ssh executor running on your own fast and powerful hardware resources (dedicated machine, shared cluster, etc.). In this case, you’d rather produce an apptainer container image with:

guix time-machine -C channels.scm -- pack -f squashfs ...

scp this container file to your computing resources and call it from the #gitlab runner.
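On the remote machine, the runner job then boils down to invoking the pack directly, something along these lines (the image file name and the test command are placeholders):

# execute the test suite inside the squashfs pack produced by guix
apptainer exec hdl-pack.squashfs python3 sim/run.py --ci-mode -v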

Github

Github is probably the most popular #gitforge out there. It is similar to #gitlab in its conception (pull requests and merge requests, you get the idea?). It also includes a container registry, and the set of features it offers may be exchanged with ease with any other #gitforge following the same paradigm. No need to go into more details.
There are a couple of interesting tips about using #github, though. More often than not, users encounter problems of #reproducibility when using container images hosted on ghcr.io, the hosting service for user images. These images are usually employed for running #ci testing pipelines, and they tend to break as upstream changes happen: updates, image definition changes, package upgrades within the image, etc. If you read my dependencies hell post, this should ring a bell.
What can be done about it where #modernhw is concerned? Well, we have #guix. Let’s try a different approach: building an image locally, and pushing it to the #github registry. Let’s see how.

in practice

An example repository shows the way to proceed. Its contents allow creating a docker container image to be hosted remotely. It includes all that’s necessary to perform remote #ci testing of a #modernhw #vhdl design.

docker pull ghcr.io/csantosb/hdl
docker images # check $ID
docker run -ti $ID bash

It includes a couple of #plaintext files to produce a #deterministic container. First, the channels.scm file with the list of guix channels to pull packages from. Then, a manifest.scm, with the list of packages to be installed within the container.
The container image may be built with

image=$(guix time-machine --channels=channels.scm -- \
             pack -f docker \
             -S /bin=bin \
             --save-provenance \
             -m manifest.scm)

At this point, it is to be loaded into the docker store with

docker load < $image
# docker images

Now it is time to tag the image

docker tag IMID ghcr.io/USER/REPO:RELEASE

and login to ghcr.io

docker login -u USER -p PASSWORD ghcr.io

Finally, the image is to be pushed remotely

docker push ghcr.io/USER/HDL:RELEASE

test

You may test this image using the neorv32 project, for example, with:

docker pull ghcr.io/csantosb/hdl
docker run -ti ID bash
git clone --depth=1 https://github.com/stnolting/neorv32
cd neorv32
git clone --depth=1 https://github.com/stnolting/neorv32-vunit test
cd test
rm -rf neorv32
ln -sf ../../neorv32 neorv32
python3 sim/run.py --ci-mode -v
 
Read more...

from Ducks

From 49.12.82.250 to 195.201.173.222: lots of domains moved, both IPs in Hetzner space. Many of the domains are fake crypto investing sites #cryptoscam. And other scam sites.

 
Read more...

from Бележник | Notеs

R hslfow szev nvmgrlmvw yvuliv, gszg, rm gsv zfgfnm lu gsv kivxvwrmt bvzi, R szw ulin'w nlhg lu nb rmtvmrlfh zxjfzrmgzmxv rmgl z xofy lu nfgfzo rnkilevnvmg, dsrxs dzh xzoovw gsv Qfmgl; dv nvg lm Uirwzb vevmrmth. Gsv ifovh gszg R wivd fk ivjfrivw gszg vevib nvnyvi, rm srh gfim, hslfow kilwfxv lmv li nliv jfvirvh lm zmb klrmg lu Nlizoh, Klorgrxh, li Mzgfizo Ksrolhlksb, gl yv wrhxfhh'w yb gsv xlnkzmb; zmw lmxv rm gsivv nlmgsh kilwfxv zmw ivzw zm vhhzb lu srh ldm dirgrmt, lm zmb hfyqvxg sv kovzhvw. Lfi wvyzgvh dviv gl yv fmwvi gsv wrivxgrlm lu z kivhrwvmg, zmw gl yv xlmwfxgvw rm gsv hrmxviv hkrirg lu rmjfrib zugvi gifgs, drgslfg ulmwmvhh uli wrhkfgv, li wvhriv lu erxglib; zmw, gl kivevmg dzings, zoo vckivhhrlmh lu klhrgrevmvhh rm lkrmrlmh, li wrivxg xlmgizwrxgrlm, dviv zugvi hlnv grnv nzwv xlmgizyzmw, zmw kilsryrgvw fmwvi hnzoo kvxfmrzib kvmzogrvh.

 
Read more...

from Бележник | Notеs

“Just as water, gas, and electricity come from afar into our home with the help of an almost imperceptible movement of the hand in order to serve us, so we will be supplied with pictures or with sequences of tones, which will appear at a slight movement, almost a sign, and will likewise leave us.”

 
Read more...

from Kevin Neely's Security Notes

I finally decided to move my #NextCloud instance from one that I had been operating on the #Vultr hosting service to my #HomeLab.

A note on Vultr: I am impressed with this service. I have used them for multiple projects and paid with various means, from credit card to #cryptocurrency for about 10 years and I cannot even remember a downtime that impacted me. (In fact, I think there was only one real downtime, which was planned, well-communicated, and didn’t impact me because my setup was fairly resilient). With a growing volume of data, and sufficient spare hardware that wasn’t doing anything, I decided to bring it in-house.

This is not going to be a full guide, as there are plenty of those, but I did run into some hurdles that may be common, especially if a pre-built Nextcloud instance was used. So this is meant to provide some color and augment the official and popular documentation.

Getting started

Plan out the migration

Migration Overview

Essentially, there are three high-level steps to this process:

  1. Build a new Nextcloud server in the homelab
  2. Copy the configuration (1 file), database (1 backup file), apps (installed apps), and data (all user files) over to the new system
  3. Restore all the copied data to the new instance

Preparing to Migrate

  1. Start with the NextCloud official documentation for migrating to a different server as well as:
    1. Backing up Nextcloud
    2. and the restoring a server doc
  2. Check out Nicholas Henkey’s migrate Nextcloud to a new server blog post. This is very thorough and has some great detail if you’re not super familiar with Nextcloud (because you used a pre-built instance)
  3. For the new build:
    1. A full set of installation instructions, placing [Nextcloud behind an Nginx proxy](https://github.com/jameskimmel/Nextcloud_Ubuntu/blob/main/nextcloud_behind_NGINX_proxy.md).
    2. An older install document for Installing Nextcloud on Ubuntu with Redis, APCu, SSL & Apache

Migration

While the official documentation describes the basics, the following are the steps I recommend. This is at a medium level of detail, providing the steps but (mostly) not the specific command-line arguments.

  1. Build the new server
    1. Use your favorite flavor of Linux (I used Debian, and these notes will reflect that)
      1. install all updates,
      2. install fail2ban or similar security if you’re exposing this to the Internet.
      3. name the new system the same as the outgoing server
    2. Download the Nextcloud install from the nextcloud download site and choose either:
      1. update the current system to the latest version of whatever major version you’re running, and then download latest-XX.tar.bz2 where ‘XX’ is your version
      2. identify your exact version and download it from nextcloud
    3. Install the dependencies (mariaDB, redis, php, apache, etc. etc.)
      1. note: if the source server is running nginx, I recommend sticking with that for simplicity, keeping in mind that only Apache is officially supported
    4. Unpack Nextcloud
    5. Validate that it’s working
    6. Place it into maintenance mode
  2. Backup the data

    1. If using multi-factor authentication, find your recovery codes or create new ones
    2. Place the server into maintenance mode
    3. Backup the database
    4. copy the database backup to a temporary location on the new server
  3. Restore the data

    1. Restore the database
    2. copy /path/to/nextcloud/config/config.php over the existing config.php
    3. rsync the data/ directory to the new server
      1. you can remove old logs in the data directory
      2. you may need to use an intermediary step, like a USB drive. It’s best if this is ext4 formatted so you can retain attributes
      3. the rsync options should include -Aaxr; you may want -v and/or --progress to get a better feel for what’s going on (see the example commands after this list)
      4. if rsync-ing over ssh, the switch is -e ssh
    4. If you have installed any additional apps for your Nextcloud environment, rsync the apps/ directory in the same way as the data dir above
    5. Validate the permissions in your nextcloud, data, and apps directories. Fix as necessary, see the info Nicholas Henkey’s post (linked above) for commands
    6. Redirect your A or CNAME record to the new system
    7. Configure SSL on the new system
    8. Turn off maintenance mode
    9. Log in and test! :fingers-crossed:
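To make the backup and restore steps concrete, the copy itself might look something like the following. This is a sketch assuming MariaDB/MySQL, a database named nextcloud, and the generic paths used above; adjust names, hosts, and paths to your setup:

# on the old server: dump the database and ship it, along with config.php
mysqldump --single-transaction -u nextcloud -p nextcloud > nextcloud-sqlbkp.bak
scp nextcloud-sqlbkp.bak /path/to/nextcloud/config/config.php newserver:/tmp/

# on the new server: restore the dump into a freshly created, empty database
mysql -u nextcloud -p nextcloud < /tmp/nextcloud-sqlbkp.bak

# from the old server: copy the data (and any extra apps) preserving attributes
rsync -Aaxr --progress -e ssh /path/to/nextcloud/data/ newserver:/path/to/nextcloud/data/
rsync -Aaxr --progress -e ssh /path/to/nextcloud/apps/ newserver:/path/to/nextcloud/apps/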

Troubleshooting

Hopefully everything is working. Make sure to check the logs if something is broken.

Log locations:

  • the nextcloud.log in the data/ directory
  • the apache logs in /var/log/apache2
  • the redis logs in /var/log/redis
  • the system logs, accessible with journalctl

Reiterating: Remember or check for these items

These are the specific notes I took as I ran into problems that I had to work around or solve. These are incorporated in the above, so this is basically a restatement of the gotchas I ran into:

  • upgrade the current one to the latest version of the current release (i.e. the latest of the major version you are on, so if you were running 29.0.3, get to 29.0.9)
    • this makes it easier when you download <version>-latest.tar.bz2
    • If you’d prefer to skip that, use the nextcloud download site with all available versions. Make sure to grab the same one and compare the specific version as listed in config.php. Example: 'version' => '29.0.9.2',
  • use the same name on the new server
  • use the same web server. Apache is officially supported, but if you’re using nginx, it will be easier to stay on that.
  • Most multi-factor authentication, like WebAuthN, FIDO hardware keys, etc. will not work over HTTP in the clear.
    • IOW: make sure you have recovery codes
  • If the apps aren’t copied over, the new server sees them as installed rather than installable. I suppose one could “delete” or remove them in the admin GUI and then reinstall, but otherwise, there was no button to force a reinstall.
  • Files and data you need to copy over after creating the install. Do each of these separately, rather than in one big pass:
    • if you have any additional apps, copy the apps/ directory over
    • copy config.php
    • copy the data/ directory
  • Is your current install using Redis-based transactional file locking?
    • If the previous system was using Redis and it is still in the configuration but Redis is not yet running on the new system, the new system will not be able to obtain file locking; essentially all users will be read-only, unable to modify or create new files.
    • In config.php, you will see settings such as 'redis' and 'memcache.locking' => '\\OC\\Memcache\\Redis',
    • make sure Redis is installed on the new system and running on the same port (or change the port in config.php)
    • Install the necessary software: apt install redis-server php-redis php-apcu
    • Ensure that the Redis and APCu settings in config.php are according to the documented single-server settings

The Memcache settings should look something like the following configuration snippet. Alternatively, you could enable and use the Redis Unix socket instead of TCP (sketched after the snippet).


'memcache.local' => '\OC\Memcache\APCu',
'memcache.distributed' => '\OC\Memcache\Redis',
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
     'host' => 'localhost',
     'port' => 6379,
],
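For reference, the socket-based variant mentioned above swaps the host and port for the Redis Unix socket, something like the snippet below. The socket path varies by distribution (check /etc/redis/redis.conf), and the web server user needs permission to read the socket:

'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
     'host' => '/var/run/redis/redis-server.sock',
     'port' => 0,
],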
 
Read more...

from Kevin Neely's Security Notes

Nextcloud administration notes

These instructions and administrative notes were written for the pre-built Nextcloud provided by hosting provider Vultr. As a way to de- #Google my life and take back a bit of #privacy, I have been using a Vultr-hosted instance for a couple years now and it has run quite well. These notes are really aimed at the small instance for personal use. Please don’t use my notes if you’re responsible for an enterprise server!

Upgrading Nextcloud

#Nextcloud, with all its PHP-based functionality, can become temperamental if not upgraded appropriately. These are my notes to remind me how to not completely break things. When upgrading, the first pass will usually bring you to the most up-to-date version of Nextcloud in your major release, e.g. an instance running 27.1.4 would be brought up to 27.1.11. Running the updater again would bring the instance to 28.0.x.

To update a Nextcloud server running on the #Vultr service to the latest version, you need to follow the steps below:

  1. Backup your Nextcloud data: Before starting any update process, it's always a good idea to create a backup of your Nextcloud data. This will ensure that you can restore your data in case of any unexpected issues during the update process.
    1. Shutdown the OS with shutdown -h now
    2. Power down the instance in Vultr
    3. Create a snapshot
    4. Wait
    5. Wait some more – depending on how much data is hosted on the system
    6. Power it back up
  2. SSH into the Vultr server: To update the Nextcloud server, you need to access the server using SSH. You can use an SSH client such as PuTTY to connect to the Vultr server.
  3. Switch to the Nextcloud user: Once you are logged in, switch to the Nextcloud user using the following command: sudo su -s /bin/bash www-data.
  4. Navigate to the Nextcloud directory: Navigate to the Nextcloud directory using the following command: cd /var/www/html (it could be /var/www/nextcloud or other; check what's in use)
  5. Stop the Nextcloud service: To avoid any conflicts during the update process, stop the Nextcloud service using the following command (as www-data): php occ maintenance:mode --on 
  6. Update the Nextcloud server: To update the Nextcloud server, you need to run the following command(as www-data): php updater/updater.phar. This will start the update process and download the latest version of Nextcloud.
  7. Update the OS, as needed, with apt upgrade
  8. Start the Nextcloud service: Once the update is complete and verified, you can start the Nextcloud service using the following command: sudo -u www-data php occ maintenance:mode --off.
  9. Verify the update: After the update process is complete, you can verify the update by accessing the Nextcloud login page. You should see the latest version of Nextcloud listed on the login page.
  10. Assuming all is running smoothly, the snapshot that was created in step 1 can be safely deleted. Otherwise, it accrues charges on the order of pennies per gigabyte per day.

Some other notes

Remove files in the trash

When a user deletes files, it can take a long time for them to actually disappear from the server.

root@cloud:/var/www/html# sudo -u www-data php -f /var/www/html/cron.php
root@cloud:/var/www/html# sudo -u www-data php occ config:app:delete files_trashbin background_job_expire_trash

Set files to expire

root@cloud:/var/www/html# sudo -u www-data php occ config:app:set --value=yes files_trashbin background_job_expire_trash

 
Read more...

from Sirius

The 1st-century BC Greek historian Diodorus is considered a compiler of ancient sources, among them some of the teachings of Democritus of Abdera. In his work, Library of History (Volume I, Chapter 8), we find an account of the origin of living beings and of the first men, which specialists such as Diels, Vlastos, Reinhardt, and Beresford attribute to the teachings of Democritus. As I begin my studies on Protagoras, who, as a disciple of Democritus, shared with him some naturalist and humanist conceptions, I present a translation of Diodorus’ account of prehistory. Fortunately, Diodorus’ Library of History has been made available in English by the University of Chicago on this site.

Below I transcribe Diodorus’ account of the first men, as an initial text for studying the connection between the thought of Democritus and that of Protagoras (including the similarities and differences with the myth of Prometheus and Epimetheus, attributed to Protagoras in Plato’s homonymous dialogue):

Diodorus’ account of prehistory

(…) the first men to be born (…) led an undisciplined and bestial life, going out one by one to secure their sustenance and feeding both on the tenderest herbs and on the fruits of wild trees. Then, since they were attacked by wild beasts, they came to one another’s aid, instructed by necessity, and, having gathered together in this way out of fear, they gradually came to recognize one another’s features. And although the sounds they made were at first unintelligible and indistinct, little by little they succeeded in articulating their speech and, by agreeing among themselves on symbols for each thing that presented itself to them, they made known to one another the meaning to be attached to each term. But since groups of this kind arose throughout every part of the inhabited world, not all men had the same language, for each group organized the elements of its speech by mere chance. This is the explanation of the present existence of every conceivable kind of language and, moreover, from these first groups that were formed arose all the original nations of the world.

Now the first men, since none of the things useful for life had yet been discovered, led a wretched existence, having no clothing to cover themselves, not knowing the use of dwellings and fire, and being wholly ignorant of cultivated food. For since they also neglected even the harvesting of wild food, they laid up no store of its fruits against their needs; consequently, a great number of them perished in the winters because of the cold and the lack of food. Little by little, however, experience taught them both to take to the caves in winter and to store such fruits as could be preserved. And when they became acquainted with fire and other useful things, the arts also, and whatever else is capable of furthering man’s social life, were gradually discovered. Indeed, generally speaking, in all things it was necessity itself that became man’s teacher, supplying in appropriate fashion instruction in every matter to a creature that was well endowed by nature and that had, as assistants for every purpose, hands, logos (reason), and anchinoia (mental sagacity).

And as regards the first origin of men and their most primitive manner of life, we shall be content with what has been said, since we wish to maintain due proportion in our account.

#Filosofia #Demócrito #Protágoras

 
Read more...

from Tai Lam in Science

I need to figure out how to reasonably deal with mail and deliveries privately.

How it started

I donated to a local nonprofit in 2024, and I really shouldn't say this, but I honestly wish I never did. However, this is not due to a reason you probably expect.

I started to receive significantly more junk mail from charitable nonprofits and groups, more so than usual (at least since the 2020 COVID-19 pandemic). I won't name specific names, but this was a local nonprofit with a total annual budget somewhere on the order of $1 million to $10 million.

(To the reader: if we know each other IRL, then I'll tell you who the offending org is; and if you're savvy with implementing an actionable fix for the issue below, then maybe we can work out a way for me to get out of this rut of a “situation” — as if this is or should be my highest-priority project to take on right now. Let's just say that some of you will be surprised by the org I have in mind, which either intentionally uses the services of data brokers, or at least has some heuristic workflow that is leaking donor info to data brokers. The overall situation has a bit of tragic irony.)

I'm (usually) not a vengeful person, at least when it comes to nonprofit orgs genuinely acting in good faith; but I am keeping a running list of the other orgs that engage in buying/selling/sharing snail mail lists as orgs I won't donate money to in the future, due to their disregard for mail privacy. However, there are 3 national-level orgs that have (so far) never sold out to physical mail lists: the ACLU, including state chapters; the EFF; and the Freedom of the Press Foundation. I am purposefully excluding comparatively technical groups that would respect the privacy and security of others in general, such as the Signal Foundation and The Tor Project.

On the other hand, the only other way to avoid excessive physical mail list tracking is to donate to small local nonprofits. (Any method is fine — if you're super concerned about protecting your membership info, using a PO box for your mailing address and renewing your member dues via paper check is more than sufficient for most local community members.) This is because these groups literally don't have the money to spend for mass mail solicitations or blanket marketing.

After this happened, I told a local activist that I'm going to go straight for a paid plan on Privacy.com (at least the lower tier) and skip the free plan. Additionally, I commented that my reaction was essentially the “I can't believe you've done this” meme. (Somehow, I initially confused this with the “Charlie bit my finger” meme.)

How it's going (and the future)

I no longer think it's safe for me to order computers and ship the delivery to my residential address, using my own debit card. (That does remind me – I really should get a credit card for better payment protection and everything else that encompasses.)

I remembered that when I ordered the HP Dev One in 2022, the outer shipping box wasn't even taped closed when it arrived on my doorstep. Due to my living situation since 2020, I no longer trust anything that goes through the mail, and after Andrew “bunnie” Huang's assessment of overall supply chain security after the 2024 exploding pager incident in Lebanon, I think it's about high time I figure out the logistics of shipping to a private mail box (PMB) – or maybe I use a friend's address and/or credit card to purchase an online-only computer (while I pay my friend for the cost, of course).

However, quite a few large computer manufacturers, who primarily have B2B (business-to-business) though also some minor B2C (business-to-consumer) sales, will tell customers during checkout that sending deliveries to a PO Box is not allowed. This includes Lenovo, HP, and even Framework. (I have to double check for System76.) This is partly why I was sad when Costco stopped selling ThinkPad laptops in-store (one probable cause might be the pandemic, but that's another matter).

If you are giving any somewhat serious consideration to becoming a Linux distro maintainer or even a package maintainer (such as for the AUR/MPR), you should at least account for this while threat modeling. I recall Ariadne Conill tweeting about how a Lenovo ThinkPad laptop that they tried ordering online was suspiciously redirected to Langley, Virginia while en route to their home in early 2022, which was symptomatic of mail interdiction. However, those tweets were deleted around late 2022 or early 2023.

 
Read more...

from lobster

There is always something new to try... https://soapbox.pub/servers/

BUT I now need to concentrate and focus. Too much candy? Too many ideas and possibilities? It all depends on the priorities we need. In other words, what is your hat colour? Black, white, grey or red? No hats for me, not even green or hoody.

Security for me is transparency or zero preference. Otherwise I am spending all my time on noise and “AI” generated attempts to fathom my rousing browsing. I am already using too many browsers, except TOR. Which is one rocky peek too many.

Slow too. Too slow. Like my keyboard. Old and clunky. Noisy and dusty. Good enough...

 
Read more...

from Tai Lam in Science

There was a guide from early 2023 on what to change in the default KDF settings of Bitwarden.

(The guide has been saved on the Wayback Machine and archive.today.)

You must log in via browser to edit these settings. (Neither the desktop apps nor the mobile apps can change the following settings.)

  1. From the main screen in Bitwarden, navigate through the following menus: Security (vertical menu) > Keys (horizontal)
  2. Select Argon2id for “KDF algorithm” and enter 10 for “KDF iterations”.
  3. Enter 64 for “KDF memory (MB)” and 8 for “KDF parallelism” (number of threads).
  4. If you changed any settings, then click on the “Change KDF” button to save any changes (and Bitwarden will log you out of your account on all devices).
    • Otherwise, if no changes were made, then you can leave the “Keys” menu.

Personal context

I need to make sure I have something I can reference when I set up organization accounts on Bitwarden for colleagues and friends.

I vaguely remember that this was discussed roughly around the same time as the discussion about how the default KDF for LUKS (full disk encryption on Linux) was set up. Back in April-May 2023, the sources for episode 132 of the Surveillance Report podcast were released during the time when the podcast was released roughly biweekly – so the podcast lagged at least 1-2 weeks behind current events.

This forum thread helped to date this news story, as well as this assessment.

 
Read more...

from lobster

Remember KISS? Keep It Simple Stewpit,

We do not have to spread ourselves thinly. We can rely on the wheel being invented. We can focus on less but better and complete and cooperate and merge efforts. That is why I trust my experience and others who are offering real services I need. Real alternatives. Really simple. Really.

 
Read more...

from beverageNotes

This evening it's Old Granddad 114. I picked it up at Costco for under $30. I've heard good things about it, so I thought it was time to try it.

The proof makes it hot, so I'm having it with some ice. On the nose, I'm getting maple, chipotle, and maybe some anise. I don't notice anything right away on the tongue, but the maple shows up with some cinnamon. The heat, along with the flavors, lingers on the tongue. There's briefly a hint of anise later. The heat sticks around and follows the swallow and hangs around.

I'm kind of reminded of whiskies that have been finished in amburana casks, but the maple isn't quite as strong.

We'll see how the second dram this evening goes...

 
Read more...

from lobster

Dear Blog friends,

Please forgive my ignorant rambling. My first post is an intro. I tended, like all of us in my Puppy Linux days, to run quite happily as root on my personal computer.

These days random password generators are driving me mad. As for key safes, I prefer writing passwords down on sticky notes. BUT changing passwords is another unnecessary chore, well, for me anyway.

My last Puppy Linux computer still has a random noise generator, written in javascript (not by me). It opens random web sites in the background, to obscure my browsing. Probably old black hat now...

I expect a Chinese Turing multiprocessor eventually or something retro but still fast for future reference.

End of ramble. As you were.

 
Read more...

from Ducks

More and more sites popping up. Some results from urlscan.io as of today (8. nov. 2024):

advokatiks.info
advokats.blog
advokats.info
canada-pol.best
canada-pol.biz
canada-pol.site
cyber-payback.pro
cyber-police.site
cyberfundreturn.pics
cyberfundreturn.pro
cyberreturnfund.digital
cyberpl.info
digital-recover.cyou
digital-recovery.autos
digital-recover.best
digital-recovery.best
digital-recovery.blog
digital-recovery.bond
digital-recovery.site
digital-recovery.xyz
digitalrecovery.autos
digitalrecovery.cam
digitalrecovery.site
digitalrefund.apicil.group
euro-pol.art
euro-polc.blog
euro-polc.site
europol-eu.com
europol-police.pro
europol-refund.info
europolonline.net
germam-pol.xyz
german-police.blog
germanic-pol.auction
gretcomp-invest.com
gretcomp-invest.com
interfundreturned.digital
internet-cyberpolice.network
queenscreekcapital.com
refunds-money.site
secureinvestments.cfd
uk-advokats.site
uk-pol.site

Some of those are probably gone when you read this.

If you are registered at urlscan.io, here is a list with “dynamic” results based on one common file: https://urlscan.io/search/#filename:%22bg-important2.png%22 There are some duplicates and maybe a few that are not related. And there are probably better ways to find more related domains.

One example of whois info. Somehow I mistrust the registrant info; one may wonder about globaldomaingroup.com and its resellers. They seem to be involved in several of these domains. This domain was registered on Sept. 24 this year and is still alive as of Nov. 8 (2024):

whois advokatiks.info (some info skipped for readability)

organisation: Identity Digital Limited (included in administrative contact info)
contact: administrative
name: Vice President, Engineering
organisation: Identity Digital Limited
address: 10500 NE 8th Street, Suite 750
address: Bellevue WA 98004
address: United States of America (the)
phone: +1.425.298.2200
fax-no: +1.425.671.0020
e-mail: tldadmin@identity.digital
contact: technical (included in administrative contact info)
nserver: A0.INFO.AFILIAS-NST.INFO 199.254.31.1 2001:500:19:0:0:0:0:1
nserver: A2.INFO.AFILIAS-NST.INFO 199.249.113.1 2001:500:41:0:0:0:0:1
nserver: B0.INFO.AFILIAS-NST.ORG 199.254.48.1 2001:500:1a:0:0:0:0:1
nserver: B2.INFO.AFILIAS-NST.ORG 199.249.121.1 2001:500:49:0:0:0:0:1
nserver: C0.INFO.AFILIAS-NST.INFO 199.254.49.1 2001:500:1b:0:0:0:0:1
nserver: D0.INFO.AFILIAS-NST.ORG 199.254.50.1 2001:500:1c:0:0:0:0:1
ds-rdata: 5104 8 2 1af7548a8d3e2950c20303757df9390c26cfa39e26c8b6a8f6c8b1e72dd8f744
whois: whois.nic.info
whois.globaldomaingroup.com

Domain Name: ADVOKATIKS.INFO
Registry Domain ID: 977211288a584007a5ea216ae869c497-DONUTS
Registrar WHOIS Server: whois.globaldomaingroup.com
Registrar URL: http://www.globaldomaingroup.com
Updated Date: 2024-09-25T09:24:07.0Z
Creation Date: 2024-09-24T15:36:20.0Z
Registrar Registration Expiration Date: 2025-09-24T15:36:20.0Z
Registrar: Global Domain Group LLC
Registrar IANA ID: 3956
Registrar Abuse Contact Email: abuse@globaldomaingroup.com
Registrar Abuse Contact Phone: +1.8053943992
Reseller: Andro Givan
Registry Registrant ID: C-1408273
Registrant Name: Anya Cruk
Registrant Street: Сумы
Registrant City: Суми
Registrant State/Province: Сумська область
Registrant Postal Code: 01001
Registrant Country: UA
Registrant Phone: +380.508445774
Registrant Email: hasladus@gmail.com
Registry Admin ID: C-1408275

(admin/tech info same as Registrant info)

Name Server: daniella.ns.cloudflare.com
Name Server: milan.ns.cloudflare.com
DNSSEC: unsigned
>>> Last update of WHOIS database: 2024-09-25 02:24:07 -0700 <<<

And one may also wonder a bit about Cloudflare:

~ % dig advokatiks.info
;; ANSWER SECTION:
advokatiks.info.  300  IN  A  172.67.170.22
advokatiks.info.  300  IN  A  104.21.39.85
;; WHEN: Fri Nov 08 2024

 
Read more...