Kevin Neely's Security Notes

A place where I can post security-related long-form thoughts, notes, and articles

A Resume Workflow

Image: a typical resume content extraction workflow, from neurond.com

I used to keep my résumé (from here, “resume”) very up-to-date. For a long time, I had a resume crafted in #LaTeX because I have a long history of using that typesetting and markup language for purposes other than the ones most people think of: I wrote my college English papers in it, and I had a slew of templates I created while I was a practicing attorney that would generate letters, motions, and envelopes from source .tex files. Keeping content in text makes it more portable across platforms and applications, whereas the nature of Microsoft Word is that you need to fully re-create the resume every couple of years because some invisible formatting munges the entire document.

TL;DR I ended up using RenderCV as mentioned below in the [[Resume Workflow#RenderCV|RenderCV section]].

In the time since I last relied upon a resume, the method of applying for jobs –and more importantly, how recruiters review submissions– has changed pretty drastically. And despite all the great advances in technology over the past ten years, apparently, HR systems still are not that great at parsing a PDF or Word doc into text that can be machine-read by whatever algorithms and/or AI they’re using to perform the first pass. Because of this, you want to make sure to submit a machine-friendly description of your experience. There really should be a standard for all this stuff that makes it easy on both the applicant and the hiring manager. Like, I don’t know, some sort of HR standards body or something. A standard has never emerged, and I suspect that LinkedIn has a lot to do with that.

Additionally, having an easy way to keep one’s resume in sync and in multiple formats means that it can be quickly used for many purposes, from printing an attractive hard copy to piping it through some [[Fabric]] AI workflows. So this set me on a fairly long hunt for a system where I could write once, and generate in multiple formats.

The search for a resume workflow

First round

LaTeX & Pandoc

Since my resume was already in LaTeX, using the 20 second CV set of templates –which I think is very nice– I went and updated that and then ran it through pandoc, which is a multi-format document converter. The results ended up being pretty poor and not useful. The PDF looked great, obviously, but pandoc did not understand the LaTeX very well and the Markdown required a lot of edits.

We want everything to look good upon compilation/export/save as/whatever, so this was not an option.

Interlude

I had kind of given up at this point, figuring I either needed to just go Google Docs or maintain a Markdown version and attempt to keep them in sync. Then, I came across a post about an auto-application bot and the author had a related project that used resume information formatted as YAML to create a specific resume based upon job description or LinkedIn post.

Resume from Job Description

This project is called resume render from job description (no cute animal names or obtuse references in this project!), and I gave it a try, but it appeared to require all the fields, including e.g. GPA. I don’t know about you, but I'm way past the point in my career where I'm putting my GPA on a resume, so it wasn’t that useful.

It was late on a Thursday night, so obviously it was time to look a bit further into the rabbit hole.

Online options

I found a number of projects that were a service model where they host and render the resume for you. These included resume.lol (I question the naming choice here), Reactive resume (opensource, excellent domain name, and it has nice documentation), and WTF resume (my thought exactly!).

These all came from a post listing 14 Open-source Free Resume Builder and CV Generator Apps.

JSONResume

As I traveled further down the Internet search rabbit hole, I came across JSON Resume, an #opensource project with a hosting component where people craft their resumes in JSON and it can then render in a number of formats either via a command-line tool or within their hosted service, making it a kind of hybrid option.

At this point, I felt like I was almost there, but it wasn’t exactly what I wanted. JSONResume is very focused on publishing within its own hosted ecosystem. The original #CLI tool is no longer maintained, and a new one is being worked on; it appears minimal but sufficient for the task. A nice thing is that they have some add-ons and have fostered a sort of ecosystem of tools. Looking over the project’s 10-year history, those tools have a tendency to come and go, but such is the nature of OSS.

The Award for “Project Most Suited to My Workflow” goes to….

Another great thing about JSON Resume is that they, i.e. Thomas Davis, have done a fantastic job of cataloging various resume systems out there in their JSON Resume projects section. There is so much interesting stuff here –and a lot of duplicative effort, ahem, see the “HR Standards” comment above– that you can spend a couple days looking for the project that best fits your needs. For me, I landed on RenderCV, which is not only in the bibliography, but also mentioned on the Getting Started page because there are tools to leverage JSON Resume from RenderCV!

So without further ado…

RenderCV

While RenderCV is a part of the JSON Resume ecosystem, in that people have created scripts to convert from the latter to the former, it is a completely separate and standalone project, written in #python and installable via pip. RenderCV’s approach is to leverage a YAML file and, from that, generate consistent resumes in PDF, HTML, Markdown, and even individual PNG files, allowing the applicant to meet whatever arcane requirements the prospective employer has.

graph LR

	YAML --> TeX & Markdown 
	TeX --> PDF & HTML & PNG

Resume generation workflow

Using RenderCV

Getting started with RenderCV is like pretty much any other project built in python:

  1. Create a virtual environment using venv or conda, e.g. conda create -n renderCV python=3.12.4
  2. Install via pip with a simple command pip install rendercv
  3. Follow the quick start guide and create a YAML file with your information in it (a minimal sketch follows this list)
  4. Run rendercv render <my_cv>.yaml
  5. View the lovely rendered résumé
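
The YAML source itself is compact. Here is a minimal sketch of the shape of a RenderCV input file, with illustrative field names; defer to the quick start guide for your version:

# Minimal RenderCV input (illustrative; check the quick start guide for your version)
cv:
  name: Jane Doe
  location: Portland, OR
  email: jane@example.com
  sections:
    professional_experience:
      - company: Example Corp
        position: Security Engineer
        start_date: 2020-01
        end_date: present
        highlights:
          - Built out the detection and response pipeline.
design:
  theme: classic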

Extending RenderCV

This was great, as I now have a very easy-to-edit source document for my résumé and can quickly create others. I’m hoping Sina, the author, makes the framework a bit more extensible in the future because the current templates are oriented toward people with STEM backgrounds looking for individual contributor roles. However, as some of us move further in our careers, the résumé should be less about skills and projects and more about responsibilities and accomplishments as we lead teams. I have enhanced the “classic” and “sb2nov” themes so that they take these keywords as subsections to a specific company/role combination under the professional_experience section.

Theme update for Leaders and Managers

I created a fork which contains updates to v1.14, adding the “Responsibilities” and “Accomplishments” subsections for company: under the Experience section.
This allows leaders to craft their resume or CV in such a way that it highlights the breadth of their influence and impact to the organization.

The following themes support the additional subsections:

  • markdown
  • classic
  • sb2nov

A non-updated theme will simply ignore the content under these subsections; omitting these sections will make the resume look like the original theme. Hopefully the framework will be more extensible in the future and I can add this as a pull request.
In the meantime, the forked repo at https://github.com/ktneely/rendercv4leaders should work on its own, or the /ExperienceEntry.j2.tex and /ExperienceEntry.j2.md files from those themes can simply be copied over the existing.

How to use

Usage is extremely straightforward, as this merely extends the framework with a couple of new keywords for the Experience section that look for a preceding company declaration. Here is an example:

professional_experience:
  - company: NASA
    position: Director of Flight Operations
    location: Houston, TX
    start_date: 1957-03
    end_date: 1964-06
    responsibilities:
      - Manage the Control room.
      - Write performance reports.
      - Smoke copious amounts of cigarettes
    accomplishments:
      - 100% staff retention over the course of 9 rocket launches.
      - Mobilized and orchestrated multiple teams to rescue astronauts trapped in space.
      - Lung cancer.

This will then render “responsibilities” and “accomplishments” as italicized sections under the job role, highlighting the difference you made while performing in that role.

Maintaining Multiple Versions

This is basically what it all comes down to: the ability to maintain different versions for your target companies. While some work is being done to modularize the source content, it is not yet to the point where each section of the resume is a building block that can be invoked at compile time. What I do is maintain different YAML files and use the parameters in the rendercv_settings section to direct the output to different, meaningfully-named directories while maintaining a generic name for the file itself.

So, instead of “Kevin-LargeCorprole.pdf”, “Kevin-Startuprole.pdf”, etc., I simply send “Kevin-CV.pdf”. This way, it’s not incredibly obvious to the reviewer that I have specially-crafted a resume for that job, it just happens to look like I have exactly what they’re looking for in my default resume.
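
In practice, that just means one render per target, with each YAML file’s rendercv_settings pointing at its own output directory (file and directory names here are illustrative):

# Each file carries its own rendercv_settings, so output lands in a
# different directory while the PDF keeps the generic name
rendercv render kevin-largecorp.yaml   # -> largecorp/Kevin-CV.pdf
rendercv render kevin-startup.yaml     # -> startup/Kevin-CV.pdf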

Automation

Want to automate the build of your resume whenever you update the source file(s)? Look no further than the rendercv pipeline, which generates the output every time you commit source to GitHub.
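
As a rough sketch, a minimal GitHub Actions workflow along those lines might look like this (my own illustration, not the project’s official pipeline; it assumes RenderCV’s default rendercv_output/ folder):

name: render-cv
on:
  push:
    paths:
      - "*.yaml"

jobs:
  render:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install rendercv
      - run: rendercv render Kevin-CV.yaml
      - uses: actions/upload-artifact@v4
        with:
          name: rendered-cv
          path: rendercv_output/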

Also, since version 1.15, the --watch flag will watch the source file locally and re-compile every time you save the source YAML file.

References and further exploration

  1. Neurond.com blog post: What is a CV/Resume Parser and How Does it Work?, Trinh Nguyen, Aug 16, 2022.
  2. TeXMaker: an Open-source TeX editor
  3. RenderCV user guide

I finally decided to move my #NextCloud instance from one that I had been operating on the #Vultr hosting service to my #HomeLab.

A note on Vultr: I am impressed with this service. I have used them for multiple projects and paid by various means, from credit card to #cryptocurrency, for about 10 years, and I cannot even remember a downtime that impacted me. (In fact, I think there was only one real downtime, which was planned, well-communicated, and didn’t impact me because my setup was fairly resilient.) With a growing volume of data, and sufficient spare hardware that wasn’t doing anything, I decided to bring it in-house.

This is not going to be a full guide, as there are plenty of those, but I did run into some hurdles that may be common, especially if a pre-built Nextcloud instance was used. So this is meant to provide some color and augment the official and popular documentation.

Getting started

Plan out the migration

Migration Overview

Essentially, there are three high-level steps to this process:

  1. Build a new Nextcloud server in the homelab
  2. Copy the configuration (1 file), database (1 backup file), apps (install apps), and data (all user files) over to the new system
  3. Restore all the copied data to the new instance

Preparing to Migrate

  1. Start with the NextCloud official documentation for migrating to a different server as well as:
    1. Backing up Nextcloud
    2. and the restoring a server doc
  2. Check out Nicholas Henkey’s migrate Nextcloud to a new server blog post. This is very thorough and has some great detail if you’re not super familiar with Nextcloud (because you used a pre-built instance)
  3. For the new build:
    1. A full set of installation instructions, placing [Nextcloud behind an Nginx proxy](https://github.com/jameskimmel/Nextcloud_Ubuntu/blob/main/nextcloud_behind_NGINX_proxy.md).
    2. An older install document for Installing Nextcloud on Ubuntu with Redis, APCu, SSL & Apache

Migration

While the official documentation describes the basics, the following are the steps I recommend. This is at a medium level, providing the details but (mostly) not the specific command-line arguments; a condensed command sketch follows the list.

  1. Build the new server
    1. Use your favorite flavor of Linux (I used Debian, and these notes will reflect that)
      1. install all updates,
      2. install fail2ban or similar security if you’re exposing this to the Internet.
      3. name the new system the same as the outgoing server
    2. Download the Nextcloud install from the nextcloud download site and choose either:
      1. update the current system to the latest version of whatever major version you’re running, and then download latest-XX.tar.bz2 where ‘XX’ is your version
      2. identify your exact version and download it from nextcloud
    3. Install the dependencies (mariaDB, redis, php, apache, etc. etc.)
      1. note: if the source server is running nginx, I recommend sticking with that for simplicity, keeping in mind that only Apache is officially supported
    4. Unpack Nextcloud
    5. Validate that it’s working
    6. Place it into maintenance mode
  2. Backup the data

    1. If using multi-factor authentication, find your recovery codes or create new ones
    2. Place the server into maintenance mode
    3. Backup the database
    4. copy the database backup to a temporary location on the new server
  3. Restore the data

    1. Restore the database
    2. copy /path/to/nextcloud/config/config.php over the existing config.php
    3. rsync the data/ directory to the new server
      1. you can remove old logs in the data directory
      2. you may need to use an intermediary step, like a USB drive. It’s best if this is ext4 formatted so you can retain attributes
      3. the rsync options should include -Aaxr; you may want -v and/or --progress to get a better feel for what’s going on
      4. if rsync-ing over ssh, the switch is -e ssh
    4. If you have installed any additional apps for your Nextcloud environment, rsync the apps/ directory in the same way as the data dir above
    5. Validate the permissions in your nextcloud, data, and apps directories. Fix as necessary; see Nicholas Henkey’s post (linked above) for commands
    6. Redirect your A or CNAME record to the new system
    7. Configure SSL on the new system
    8. Turn off maintenance mode
    9. Log in and test! :fingers-crossed:
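
Here is the condensed sketch of the backup and restore commands mentioned above. Paths, hostnames, and database names are illustrative; adjust them to your install:

# Old server: dump the database (assuming MariaDB/MySQL)
sudo mysqldump --single-transaction -u nextcloud -p nextcloud > nextcloud-db.sql

# New server: restore the dump into the empty database
mysql -u nextcloud -p nextcloud < nextcloud-db.sql

# Sync data/ (and apps/, the same way), preserving ACLs and attributes
sudo rsync -Aaxr --progress -e ssh /var/www/nextcloud/data/ newhost:/var/www/nextcloud/data/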

Troubleshooting

Hopefully everything is working. Make sure to check the logs if something is broken.

Log locations:

  • the nextcloud.log in the data/ directory
  • the apache logs in /var/log/apache2
  • the redis logs in /var/log/redis
  • the system logs, accessible with journalctl

Reiterating: Remember or check for these items

These are the specific notes I took as I ran into problems that I had to work around or solve. These are incorporated in the above, so this is basically a restatement of the gotchas I ran into:

  • upgrade the current one to the latest version of the current release (i.e. the latest of the major version you are on, so if you were running 29.0.3, get to 29.0.9)
    • this makes it easier when you download <version>-latest.tar.bz2
    • If you’d prefer to skip that, use the nextcloud download site with all available versions. Make sure to grab the same one and compare the specific version as listed in config.php. Example: 'version' => '29.0.9.2',
  • use the same name on the new server
  • use the same web server. Apache is officially supported, but if you’re using nginx, it will be easier to stay on that.
  • Most multi-factor authentication, like WebAuthN, FIDO hardware keys, etc. will not work over HTTP in the clear.
    • IOW: make sure you have recovery codes
  • If the apps aren’t copied over, the new server sees them as installed rather than installable. I suppose one could “delete” or remove them in the admin GUI and then reinstall, but otherwise, there was no button to force a reinstall.
  • Files and data you need to copy over after creating the install. Do each of these separately, rather than as one bulk copy:
    • if you have any additional apps, copy the apps/ directory over
    • copy config.php
    • copy the data/ directory
  • Is your current install using Redis-based transactional file locking?
    • If the previous system was using Redis and it is still in the configuration, the new system will not be able to obtain file-locking and essentially all users will be read-only and not able to modify or create new files.
    • In config.php, you will see settings such as 'redis' and 'memcache.locking' => '\\OC\\Memcache\\Redis',
    • make sure Redis is installed on the new system and running on the same port (or change the port in config.php)
    • Install the necessary software: apt install redis-server php-redis php-apcu
    • Ensure that the Redis and APCu settings in config.php are according to the documented single-server settings

The Memcache settings should look something like the following configuration snippet. Alternatively, you could enable and use the process socket.


'memcache.local' => '\OC\Memcache\APCu',
'memcache.distributed' => '\OC\Memcache\Redis',
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
     'host' => 'localhost',
     'port' => 6379,
],
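
If you go the socket route instead, the documented pattern is to point 'host' at the Redis socket path and set 'port' to 0 (the socket path varies by distro and Redis configuration):

'redis' => [
     'host' => '/var/run/redis/redis-server.sock',
     'port' => 0,
],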

Nextcloud administration notes

These instructions and administrative notes were written for the pre-built Nextcloud provided by hosting provider Vultr. As a way to de- #Google my life and take back a bit of #privacy, I have been using a Vultr-hosted instance for a couple years now and it has run quite well. These notes are really aimed at the small instance for personal use. Please don’t use my notes if you’re responsible for an enterprise server!

Upgrading Nextcloud

#Nextcloud, with all its PHP-based functionality, can become temperamental if not upgraded appropriately. These are my notes to remind me how to not completely break things. When upgrading, the first pass will usually bring you to the most up-to-date version of Nextcloud in your major release, e.g. an instance running 27.1.4 would be brought up to 27.1.11. Running the updater again would bring the instance to 28.0.x.

To update a Nextcloud server running on the #Vultr service to the latest version, you need to follow the steps below (condensed into a single command block after the list):

  1. Backup your Nextcloud data: Before starting any update process, it's always a good idea to create a backup of your Nextcloud data. This will ensure that you can restore your data in case of any unexpected issues during the update process.
    1. Shutdown the OS with shutdown -h now
    2. Power down the instance in Vultr
    3. Create a snapshot
    4. Wait
    5. Wait some more – depending on how much data is hosted on the system
    6. Power it back up
  2. SSH into the Vultr server: To update the Nextcloud server, you need to access the server using SSH. You can use an SSH client such as PuTTY to connect to the Vultr server.
  3. Switch to the Nextcloud user: Once you are logged in, switch to the Nextcloud user using the following command: sudo su -s /bin/bash www-data.
  4. Navigate to the Nextcloud directory: Navigate to the Nextcloud directory using the following command: cd /var/www/html (could be /var/www/nextcloud or other; check what's in use)
  5. Stop the Nextcloud service: To avoid any conflicts during the update process, stop the Nextcloud service using the following command (as www-data): php occ maintenance:mode --on 
  6. Update the Nextcloud server: To update the Nextcloud server, run the following command (as www-data): php updater/updater.phar. This will start the update process and download the latest version of Nextcloud.
  7. Update the OS, as needed, with apt upgrade
  8. Start the Nextcloud service: Once the update is complete and verified, you can start the Nextcloud service using the following command: sudo -u www-data php occ maintenance:mode --off.
  9. Verify the update: After the update process is complete, you can verify the update by accessing the Nextcloud login page. You should see the latest version of Nextcloud listed on the login page.
  10. Assuming all is running smoothly, the snapshot that was created in step 1 can be safely deleted. Otherwise, snapshots accrue charges on the order of pennies per gigabyte per day.
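
Condensed, the on-server portion of the upgrade looks like this (assuming the web root is /var/www/html):

sudo su -s /bin/bash www-data
cd /var/www/html
php occ maintenance:mode --on
php updater/updater.phar    # follow the prompts; it can run occ upgrade for you
php occ maintenance:mode --off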

Some other notes

Remove files in the trash

When a user deletes files, it can take a long time for them to actually disappear from the server.

root@cloud:/var/www/html# sudo -u www-data php -f /var/www/html/cron.php
root@cloud:/var/www/html# sudo -u www-data php occ config:app:delete files_trashbin background_job_expire_trash

Set files to expire

root@cloud:/var/www/html# sudo -u www-data php occ config:app:set files_trashbin background_job_expire_trash --value=yes
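
Relatedly, the retention window itself is governed by trashbin_retention_obligation in config.php. For example, to let Nextcloud expire trash automatically but keep files no longer than 30 days (see the admin docs for the value semantics):

'trashbin_retention_obligation' => 'auto, 30',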

I’ve been a “dabbler” with crewAI for a while now, having come across it in March of 2024 and tinkering when I have some time. I love the idea of task-based LLM actions that are specific enough that you can use lower cost but specifically-trained models for the tasks, even running those models on your own hardware. A few weeks back, my team at work used CrewAI for a hackathon in an effort to automate an onerous process, and it came out pretty well!

So, when I saw that they have a couple of official training videos on a new e-learning platform called DeepLearning.ai, I figured I’d check them out. #CrewAI is evolving rapidly, and some of the notes I’ve taken over the past 8 months aren’t even applicable anymore, so I figured this was a great way to level-set and fill in gaps in my knowledge.

I’m not going to describe CrewAI here, other than it’s a framework for easily building multi-agent teams and requires little to no coding experience. How CrewAI works is well-explained in the first fifteen minutes of the course, so at least listen to that part!

About the Course

The course, Multi AI Agent Systems with crewAI, follows a flow familiar to anyone who has taken online courses, and is taught by the creator of crewAI, João Moura. The lessons, ranging from a minute to 18 minutes, are a mix of descriptive lecture and hands-on coding. For the lessons where coding is involved, the window handily split-screens, with an iPython notebook environment on one side, pre-populated with the code, so you can work through it as João explains what it does.

You can also get an indication of the course and CrewAI by checking out my crewAI experimentation repo on GitHub.

Target Audience Analysis

Professionals working in project management, artificial intelligence, and team leadership can greatly enhance their skills in constructing multi-agent teams. Those keen on optimizing team performance, utilizing state-of-the-art technologies for collaborative work, and streamlining task execution processes would discover value in enrolling in specialized online classes tailored to augment their proficiency in this realm. Addressing prevalent challenges like steering diverse teams, accomplishing project goals in intricate scenarios, and keeping pace with evolving team dynamics is indispensable for professionals aiming to excel in their respective roles.

Content Outline

Introduction to Multi-Agent Teams

Familiarizing with the basics of multi-agent teams and their significance in managing complex tasks effectively.

Importance of Building Multi-Agent Teams for Complex Tasks

Unveiling the reasons why seamless collaboration among agents is crucial for successful task fulfillment.

Strategies for Creating Effective Multi-Agent Teams

Scrutinizing established methodologies for assembling and overseeing high-performing multi-agent teams.

Multi-Agent Crew Performance Examples

The bulk of the course is working through five clear examples of building multi-agent systems with CrewAI. The result is a set of real-world instances where multi-agent teams can perform and achieve remarkable results across diverse problems.

write_article

The first example is super simple: take a task you might do with a single prompt in an #LLM chatbot, such as chatGPT, and have it performed by multiple agents, each with their own persona. This performs no research, and the output is purely from the LLM, making it an easy entry point for anyone. (Hint: I had it write an article about taking an online course for developing multi-agent teams, and even incorporated a bit of it into this.)

  • This one required an update from the training to run on the latest version of crewai.
  • Older versions of crewai used an integer for the verbose declaration; if you are running the latest, you need to change that to a boolean, e.g. True (see the sketch below).
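
For example, the change amounts to something like this (a sketch, with the agent fields abbreviated):

from crewai import Agent

writer = Agent(
    role="Content Writer",
    goal="Write an engaging article on the assigned topic",
    backstory="A seasoned writer who turns outlines into prose.",
    # verbose=2,    # older crewai releases accepted an integer here
    verbose=True,   # current releases expect a boolean
)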

This example is definitely one you’ll want to revisit after you learn how to use and assign tools to your agents in the following lessons.

customer_support

This creates a customer support agent that can answer questions about a specific product or service. It does this by accessing the URL with the support documentation.

customer_outreach

This example creates some marketing lead material to reach out to a specific company that is a potential customer or partner for a second, i.e. “your” company.

event_planning

This example uses multiple agents to research, identify, create the logistics, and then create some marketing material for an event. It takes in parameters like city, size, and budget in order to find a viable venue.

  • I believe it was this one where I had to fiddle with the asynchronicity of the agents, since I understand that CrewAI needs the final task in a crew to run synchronously. I could have that wrong, but I had to change that to make mine work (sketch below).
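
If I recall correctly, the knob in question is the async_execution flag on a Task, roughly like this (a sketch; check the current crewAI docs before copying):

from crewai import Agent, Task

venue_agent = Agent(
    role="Venue Coordinator",
    goal="Find a venue matching the event constraints",
    backstory="Knows every event space in town.",
)

venue_task = Task(
    description="Find a venue in {city} within {budget} for {size} attendees",
    expected_output="A shortlist of viable venues",
    agent=venue_agent,
    async_execution=True,  # fine for intermediate tasks; the final task in my crew had to run synchronously
)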

This is actually a super-cool example, but I found that the LLMs did not adhere to the parameters, often getting venues too small or ignoring the input I would provide while they were performing their tasks. That’s to be expected, however, and I think experimentation is the name of the game when it comes to building these systems.

resume_builder

The final one was to have the agents create bespoke resumes, based upon the job one is applying for. As opposed to the event planning exercise, the output on this one was very good, and I was impressed with how well it could craft a resume for the specific job, as well as anticipate some of the interview questions and provide some hints for how to answer them.

Conclusion

This course provides a clear and thorough introduction to crewAI, bringing the attendees to an intermediate level of being able to use the framework. By immersing themselves in the intricacies of multi-agent team dynamics, professionals can acquire the requisite knowledge and proficiency to thrive in today's collaborative work settings. Embracing online classes tailored to address the subtleties of forming effective multi-agent teams represents a proactive stride towards honing essential skills and keeping abreast in the ever-dynamic professional sphere.

Critiques

  • The venue was too bright. Chillout rooms and talk tracks could have used a dimmer.
  • Speaking of the Chillout room, it was somewhat disappointing. (I’m talking about Chillout 2, as Chillout 1 felt like a giant hospital waiting room). I like a cavernous, dim, and ambient room for, you know, chilling out. #SomaFM was over in the hallway, the Chillout room had a live stage, and it was overall pretty small.

“Best-ofs”

These are the best things I personally saw or were close to. There’s so much going on that this just represents the best stuff I saw in my fractional DEFCON experience.

  • Best thing I learned: Gained a good bit of familiarity with InspectAI at the AI Village as a part of their CTF.
  • Best Talk: “Librarian in Broad Daylight: Fighting back against ever encroaching capitalism” by the Cyberpunk Librarian in the War Stories track.
  • Best Rant: Cory Doctorow on #enshittification
  • Best Tool or Technique: “MySQL honeypot that drops shells”
  • Best Research: “Watchers being watched: Exploiting the Surveillance System” in which the researchers exploited 4 different surveillance systems.
  • Best Real-World Impact: “Bastardo Grande: Hunting the Largest Black Market Bike Fence In The World” by Bryan Hance. Talk not up yet, see the related Wired article
  • Best Contest: There’s too many, but I loved the idea of Sn4ck3r, the machine that vends real items for captured flags.
  • Best Party: the 503 Party, of course!
  • Best Entertainment: DJ Jackalope brought an awesome set after Hacker Jeopardy. (and Skittish and Bus did a great job warming up the crowd just before)
  • Biggest Drama: the badge
  • Best Village: The Packet Hacking village due to the supreme DEFCON-y ambience and the well-run workshops they provided to people of all skill levels

Observations & Random Points

  1. I probably haven’t been to a main track talk in person for over 6 years. I decided to go to a few of them and really enjoyed the atmosphere. I’ll have to remember to put at least 2 on the agenda each year going forward.
  2. BlueTeam Village got a much larger space this year. I’m happy to see that, as they were nearly impossible to get into over at the Flamingo in recent years. BTV is doing good work and people should be able to experience it.
  3. There were a lot of contests.
  4. The Car-hacking village really brings it. They had a semi truck rig, a Rivian, and they gave away a Tesla. Well done, and my only ask is that we make it easier for people & mechanics to jail break their cars when the companies John Deere-ify them.

Next #DEFCON will be held Aug 7-10, 2025 at the LVCC. I hope to see you there!

Automatically creating new content from old

You know what I hate? Repetition. Still, a difficult lesson that I’ve learned, through both experience and discussions with experts in marketing, is that repetition is the key to getting one’s message across. I like to believe that a single, coherent and cogent conveyance of thought is all it takes to deliver, for example, a new security model or change in policy, or whatever. In reality, the propagator of the message –in this case, the security team– has to deliver the message, announce it a second time, send an email, post a document on the intranet, have more meetings with the same people, make sure to cover the topic on various 1:1 meetings, etc. etc. ad infinitum.

And that is why I have been working with Daniel Miessler’s fabric framework to devise a prompt that can take variations of the same messaging and create new, yet fresh, versions of the same message. So far, the following prompt definition has been pretty strong in creating new content from old ideas.

My typical approach is to run the same prompt through multiple #LLM models:

cat examples.md | fabric --model gemma2:latest -sp create_similar
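
To fan the same input out across several models and keep each result, a simple loop does the trick (the model names are just whatever I happen to have pulled in ollama):

# Run the prompt through several local models, saving each output separately
for m in gemma2:latest llama3:latest mistral:latest; do
  cat examples.md | fabric --model "$m" -sp create_similar > "create_similar-${m%%:*}.md"
done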

Interestingly, in a couple test runs with different example files, I found that #OpenAI’s models under-performed the open source models I used via #ollama. To be clear, this is uncommon. Most of the #fabric prompts appear to be tuned for –or at least work better with– OpenAI models. With this particular prompt, this was not the case. No matter; my approach to using genAI and LLMs is to run a prompt and content through multiple inference servers, models, and even different temperatures in order to create a collection of data that I can then hand-mold into something usable[^1].

Fabric Prompt

Here is the system.md contents from the fabric “create_similar” prompt

# INPUT

INPUT:

# IDENTITY and PURPOSE

You are an advanced analytical AI adept at extracting specific points from many examples, identifying similarities and differences, synthesizing what works best, and then creating a similar but much much better version of the example.  The input contains a number of examples of the type of content needing to be synthesized.  The first section titled "Purpose" describes the nature of the examples and indicates the topic and information to be conveyed.  Documents will be delineated with a title such as 'EXAMPLE 1' specifically calling out the beginning of a new document.  The end of each example is delineated with a footer such as 'EXAMPLE 1 END'.  Your objective is to understand the style and format of the document examples and create new similar content.

# STEPS

1. Review each document carefully, taking the time to extract and understand the primary points made in each one.
2. Compare and contrast the list of points from each document against the points made in the other documents
3. Extract the key points made by the examples, taking particular note of similarities between them.
4. Output the results as specified in the output instructions.


# OUTPUT INSTRUCTIONS

Craft and create a new document using the format and style identified from the examples.  The output must be new and novel while keeping to the themes and style of the examples.  Do not use any language from the examples verbatim.

I have found that local models, such as gemma and llama3, work a bit better with the INPUT field placed at the top of the prompt, which is why it appears first in the system.md above.

Purpose

Up top, I mentioned that it has taken me some time to learn that repetition is the requirement for conveying a message, and a longer time to put this into regular practice. This goes 3x for our individual contributor team members. Personally, I don’t like this because it feels inefficient, but ICs don’t do it because they are very busy, and re-visiting something takes away from closing out the next item in their queue. Nonetheless, repetition is paramount, so devising low-interaction methods to revisit is crucial for regular operations.

Footnotes

[^1]: I have a feeling that I could do most of the work faster if I simply did it by hand. However, running through multiple LLMs in this manner provides a bit of randomness, and helps me remain apprised on the state-of-the-art for this strange field of study.

Creating a Michelin menu with Generative AI

We've been watching season three of The Bear and it is totally nuts, filled with drama and psychology, and human relationships, and –when they can get around to it– cooking food. This is a fantastic series, and as foodies, you can imagine that my wife and I really enjoy it. The series is intense, following a classically-trained chef at the top of his game as he “drops down” from a fancy restaurant to a steak sandwich shop and works back toward where he was before.

That’s not a spoiler, it’s the premise. And this isn’t a review; it was just something in my mind as I experimented with this new tool called LobeChat, which is an interface to #genAI inference servers such as #OpenAI’s #chatGPT, #Claude, and nearly any other commercial or #opensource system you can imagine. I'm tweaking a #homelab hosted genAI inference server, and this post is about experimenting with Ollama and a LobeChat front end for easy access. I haven’t formed a final opinion on this setup at this time, but the #LobeChat front end allows for quick switching between inference servers and their hosted models. While not super easy to configure, once it works, the whole thing looks very easy for someone, i.e. a roommate or a spouse, to flip between models – even in the same conversation.

So let’s look at some results. Of course, the results are about the models themselves, but the point here is that LobeChat made it easy to switch between them as I continued to develop the conversation.

First prompt

This was with ChatGPT (4o, I think). I needed something to test that an integration was working. We had just watched The Bear so this is what I came up with:

as fast as you can, what's the best restaurant dinner menu with wine pairing that could win a Michelin 3 star award?

Response

Crafting a Michelin 3-star worthy dinner menu with wine pairings involves high-quality ingredients, exquisite preparation, and a thoughtful combination of flavors. Here’s a conceptual menu that could impress:

Amuse-Bouche: Oyster with Champagne Foam
Wine Pairing: Champagne Brut

Appetizer: Seared Scallops with Cauliflower Puree and Caviar
Wine Pairing: Chablis, Premier Cru

Soup: Cream of White Asparagus with Truffle Oil
Wine Pairing: Viognier

Fish Course: Poached Turbot with Fennel and Citrus Beurre Blanc
Wine Pairing: Pouilly-Fumé

Main Course: Roasted Duck Breast with Plum Sauce, Parsnip Mash, and Seasonal Vegetables
Wine Pairing: Pinot Noir, preferably from Burgundy

Cheese Course: A Selection of Artisan Cheeses with Honeycomb
Wine Pairing: Vintage Port

Dessert: Chocolate Lava Cake with Raspberry Coulis and Homemade Vanilla Bean Ice Cream
Wine Pairing: Recioto della Valpolicella

Mignardises: Petit Fours and Truffles

This menu balances rich flavors with delicate preparation, offering a progression of dishes that showcase both technique and creativity. Each wine pairing is chosen to enhance the flavors of the dishes, aiming to create a harmonious dining experience that could captivate both critics and diners alike.

Review

Not a bad first response, especially since I was just checking to see if the system worked.

Prompt 2: Get ChatGPT to refine it

That was pretty boring. I’m thinking early 90s fancy dining at best, but I wasn’t eating fancy back then and assume it was all shoulder pads, tuna steaks, and cocaine. Continuing with ChatGPT, I posed the following in an attempt to refine our menu (in retrospect, I was generous with the stars):

That's an incredibly boring menu. At best 2 stars. And that's in a region that is not known for its gastronomical excellence

Response

Creating a truly innovative and Michelin 3-star worthy menu certainly demands a more avant-garde approach, pushing culinary boundaries with unique combinations and exceptional techniques. Let's elevate the original concept to something more daring and creative:

Amuse-Bouche: Nitro-frozen Lime and Ginger Pearls with Micro Basil
Wine Pairing: Small-batch Sake

First Course: King Crab Tartare with Avocado Mousse, Mango Gel, and Edible Flowers
Wine Pairing: Albariño

Second Course: Foie Gras Au Torchon with Saffron Pear Compote and Brioche Crisp
Wine Pairing: Sauternes

Fish Course: Butter-Poached Monkfish with Leek Ash, Sea Beans, and a Lemon Verbena Emulsion
Wine Pairing: White Burgundy

Intermezzo: Cucumber and Elderflower Sorbet

Main Course: Sous-vide Venison Loin with Black Garlic Purée, Wild Mushrooms, and a Juniper Berry Reduction
Wine Pairing: Barolo

Cheese Course: A Selection of Aged Cheeses with a Fig and Walnut Tapenade
Wine Pairing: Aged Madeira

Dessert: Deconstructed Passion Fruit Tart with Coconut Foam, Crystallized Ginger, and Meringue Shards
Wine Pairing: Late Harvest Riesling

Petit Fours: Matcha Green Tea Macarons, Raspberry Pâte de Fruit

This revised menu introduces more complex techniques and unusual ingredients, aiming to surprise and delight with every course. Wine pairings are carefully selected to complement and enhance these adventurous flavors, providing a sophisticated and memorable dining experience.

Review

Not bad! It would be nice if the wines were more than broad regions; however, I do appreciate the ‘ñ’ in Albariño. There’s nothing particularly innovative here; the fancy-sounding “Foie Gras Au Torchon with Saffron Pear Compote and Brioche Crisp” is a Thomas Keller recipe, so at least we’re pulling from the top now that I asked it to up its game.

Prompt 3: Let's try a different model

Not only am I using a different model at this point, llama3:instruct, but I am going to specify a food critic in order to change the menu.

Prompt

One more try: let's make a menu that would knock even James Beard out of his chair because it's so good

Response

You are absolutely right! That menu, while technically sound, lacks the daring, innovation, and personal touch needed to truly impress Michelin inspectors and earn those coveted three stars. It plays it too safe, relying on classic pairings and predictable dishes.

To push towards that 3-star level, we need to think outside the box:

  • Ingredient Focus: Instead of “seasonal vegetables,” source hyper-local, unique produce with a story. Maybe a heirloom tomato variety grown in volcanic soil, or foraged mushrooms with a distinct flavor profile.
  • Deconstruction & Fusion: Take classic elements and reimagine them. A deconstructed apple pie might involve caramelized apple spheres, crumble as a powder, and a cider foam. Fusion could bring Japanese umami to the duck dish, or Indian spices to the scallops.
  • Unexpected Textures: Play with contrasts – crunchy, creamy, smooth, airy. Think smoked paprika meringue alongside the chocolate lava cake, or a black truffle “soil” under the turbot.

Wine Pairings Should Be Equally Bold:

  • Instead of a basic Chablis, consider an obscure producer from the Côte de Beaune known for mineral-driven wines.
  • The Pinot Noir could be from a lesser-known region with unique terroir, like Slovenia's Primorska.
  • Experiment with natural wines, or even sake pairings that complement specific dishes.

The Presentation Matters:

Plating should be artistic, telling a story through the arrangement of colors, textures, and shapes. It’s about creating an immersive experience for the diner.

A Michelin-worthy menu isn't just about taste; it's about pushing boundaries, showcasing creativity, and offering a truly unforgettable journey through flavor.

Review

This is less of a menu than some advice for the aspiring chef. Let’s try something different.

Prompt 4: Let's go for Three Stars!

I felt like this was the time to go for the gusto, let's see how a different model would respond, but also with all the history behind it.

Prompt

I asked: > One more try: let's make a menu that would knock even James Beard out of his chair because it's so good

While it's not a crazily-innovative menu, it has some classic touches of what an American, Michelin-starred menu would look like.

Response with gemma2

You want to blow James Beard away? This isn't about ingredients, it's about experience.

The Menu: A Symphony in Five Acts

  • Act I: Awakening – Smoked Salmon “Caviar” with Beetroot Foam, Rye Crisp, and a single drop of Ossetra Caviar.

    • Pairing: Dry Riesling from Germany's Mosel Valley, expressing its slate minerality alongside the salmon's richness.
  • Act II: Earth & Sky – Roasted Quail “Confit” with Black Truffle Jus, Heirloom Carrot “Clouds,” and a dusting of Micro Thyme.

    • Pairing: Oregon Pinot Noir, showcasing earthy notes that harmonize with the truffle while its bright acidity cuts through the richness.
  • Act III: Fire's Dance – Charred Octopus with Smoked Paprika Aioli, Caramelized Onion Jam, and a drizzle of Chili Infused Olive Oil.

    • Pairing: Spanish Albariño, crisp and mineral-driven, its citrus notes complementing the octopus while balancing the smoky paprika.
  • Act IV: Ocean's Bounty – Pan-Seared Sea Scallops with Saffron Risotto “Foam,” Dulse Crisp, and a single Pearl Oyster.

    • Pairing: Chablis Premier Cru, its intense minerality mirroring the scallops' brininess, while subtle citrus notes elevate the saffron risotto.
  • Act V: Sweet Serenade – Deconstructed Peach Cobbler with Lavender Honey Ice Cream, Toasted Almond Brittle, and a hint of Rosemary Extract.

    • Pairing: Late Harvest Gewürztraminer from Alsace, its floral aroma echoing the lavender, while its sweetness complements the peach cobbler's richness.

The Experience:

  • Each course is plated as a miniature work of art, using contrasting colors, textures, and temperatures.
  • The dining room itself is a sensory experience, with soft lighting, calming music, and subtle aromas that enhance each dish.
  • Service is impeccable, with knowledgeable staff guiding guests through the tasting menu and explaining the inspiration behind each course.

This is not just food; it's a journey for the senses, a story told through flavor and texture, leaving James Beard speechless (and perhaps, slightly envious).

Review

Wow, what a different response! This one could actually be a menu, with the wines having robust and specific descriptions. I especially like how the wines are sourced from all over the globe, really picking best of breed in the selections. As good as the wines are, however, the food does not look 3-star level to me. The dishes sound really nice, and there’s a fair bit of molecular gastronomy in there. However, aside from the poetic nature of the menu, I’m not sure this is a top-tier menu for a chef to execute.

Still, most of these items are good enough for a fake restaurant. If this was to show up in a video game or a novel, I think most people would accept it and move on.

The annual Verizon Data Breach Investigation Report is out, and along with it an accounting of the #cybersecurity compromise and data breach happenings for the past year. As always, it's filled with data analysis and some trending commentary from the previous DBIR report.

The following is a TL;DR summary generated using fabric by Daniel Miessler with some very minor editing.

SUMMARY:

This year, the 2024 DBIR reveals a significant rise in vulnerability exploitation and ransomware attacks, emphasizing the urgent need for enhanced cybersecurity measures.

  • Vulnerability exploitation tripled from last year, driven by zero-day vulnerabilities.
  • Ransomware and extortion-related breaches accounted for 32% of all breaches.
  • Human error remains a significant factor, involved in 68% of breaches.
  • Third-party breaches increased by 68%, highlighting supply chain vulnerabilities.
  • Financially motivated attacks dominate, with ransomware and extortion leading the charge.
  • The median loss from #ransomware and extortion attacks was $46,000.
  • Phishing remains a critical initial attack vector, with a notable increase in reporting rates.
  • The use of stolen credentials and exploitation of vulnerabilities are top methods for system intrusion.
  • The #MOVEit vulnerability significantly impacted the threat landscape, affecting numerous industries, continuing the trend of “secure file transfer” systems being a significant risk to both company and customer data.
  • MOVEit and remote access (e.g. VPN) compromises are the focus of successful system-compromise attacks.

STATISTICS:

  • 180% increase in attacks involving vulnerability exploitation.
  • Ransomware accounts for 23% of breaches, with pure extortion at 9%.
  • Human element involved in 68% of breaches.
  • 15% of breaches involved third-party vulnerabilities.
  • Errors contributed to 28% of breaches.
  • Financial motives behind 92% of industries targeted by ransomware.
  • Median loss from ransomware/extortion attacks is $46,000.
  • 20% of users reported phishing attempts in simulations.
  • Median time to click on a phishing email is 21 seconds.
  • Exploitation of vulnerabilities as the initial breach action doubled from last year.

QUOTES:

  • “Ransomware and extortion-related threats continue to evolve, posing significant risks across industries.”
  • “The human element remains a critical vulnerability in cybersecurity defenses.”
  • “Supply chain vulnerabilities are increasingly being exploited by attackers.”
  • “Misdelivery errors highlight the ongoing challenge of human error in data breaches.”
  • “Financially motivated attacks dominate the cyber threat landscape.”
  • “The MOVEit vulnerability has had a profound impact on the cybersecurity threat landscape.”
  • “Increased reporting rates for phishing attempts indicate growing awareness among users.”
  • “The rapid response to phishing emails underscores the need for continuous user education.”
  • “Stolen credentials and vulnerability exploitation remain preferred methods for attackers.”
  • “The rise in third-party breaches underscores the importance of vendor security assessments.”

RECOMMENDATIONS:

  • Implement multi-factor authentication to mitigate the risk of stolen credentials.
  • Regularly update and patch systems to protect against vulnerability exploitation.
  • Conduct continuous phishing awareness training for all employees.
  • Perform thorough security assessments of third-party vendors and suppliers.
  • Deploy endpoint detection and response solutions to identify and mitigate ransomware attacks.
  • Encourage the reporting of phishing attempts and provide clear reporting procedures.
  • Utilize web application firewalls to protect against basic web application attacks.
  • Establish robust data backup and recovery processes to minimize the impact of ransomware.
  • Monitor for unusual activity indicating the misuse of privileges by internal actors.

I always loved Lesley Carhart's blog post on packing for hacker conferences and referred to it many times while prepping for #DEFCON, #BSides, other cons, and even general travel. As time has gone by, I've developed a three-tier system that kind of builds on itself for longer and more involved travel. The general idea is that each tier builds upon the one below it, so a longer or more involved trip just means adding the next tier to the bag.

Tier 1 Go Bag – The Weekender

The most basic level of the tech travel stack I've created is what I call “The Weekender”. It's meant for being out and about all day long or for short weekend getaways. As such, the requirements are basically:

  1. Take up little room, being able to fit in any backpack or even a sling bag.
  2. Be able to charge the devices I'm likely to carry, from ear buds to a laptop.
  3. Plan for extended periods away from a power source.

Image 1: Tier 1 go bag – The Weekender, with a backup battery, USB-C to USB-C cable, USB-A to micro-USB cable, and USB-C adapter. Small, ready to go, and easy to drop into any bag.

Bag Contents

In order to address these simple requirements, I realized I needed to be able to provide power to USB-C and micro-USB devices. For a laptop, I need a bit more oomph, so the adapter must deliver enough power to charge a laptop battery. Limited by the space requirements, I went with a 33W charger that can absolutely charge a laptop, but it will not keep up with power consumption under load. This means that if I'm going to be working all day on the laptop, I'm going to need to move up to the next tier.

Power sources & adapters

  • 1x multi-adapter (USB-A for devices, USB-C for laptops) like the Anker 323. At 33W it won’t fully power a laptop under load; however, it will greatly extend the battery life and will charge the laptop when it’s off or in standby
  • 1x 5,000 mAh battery pack with dual USB-C ports – thin and light is key here

USB

Cables

Note that all cables can transfer data. For versatility, I don’t mess with power-only cables.

  • 1x USB-A to micro-USB cable – 3 ft.
  • 1x 5 ft. USB-C to USB-C cable – this is the minimum length you want to ensure your phone can reach the bed when charging

Converters

Converters extend the utility and versatility of the other equipment:

  • 1x USB micro female to USB-C male – this gives me a third USB-C cable

Image 2: Zipped Weekender go-bag and its contents in detail

I came across a post the other day by JWZ regarding the #billionaire behind one of the more infamous #hacking-for-hire companies around. It's been fairly well known for some time that people and companies were selling their offensive #cybersecurity skills like this, but what was news to me was that Rajat Khare was paying and bullying his way to wiping the slate clean by removing the articles regarding his involvement.

So, given that large language models are trained on data from a certain point in time, I thought it would be interesting to see what one “knew” about Rajat Khare’s activities. I used Mistral Instruct 7B and asked it to create an article regarding Khare’s involvement with funding hack-for-hire companies. The output seemed very good and accurate; I was able to quickly confirm nearly everything in the output below.

I also wanted the reporter’s voice to be hard-hitting, 50’s style journalism, which I don’t think came out very well. It did, however, lead me down a rabbit hole into Russian Oligarch Rybolovlev by linking him to some hacking-for-hire as a customer.

Mistral-Instruct 7B generated article

In the shadows of the digital world, there exists an insidious network of hackers who have been orchestrating a campaign of cybercrime that has left entire nations vulnerable. These hackers operate in secret, using sophisticated techniques to breach even the most secure systems and steal sensitive data from politicians, celebrities, and other high-profile figures around the world.

At the heart of this criminal enterprise is Rajat Khare, an Indian venture capitalist who has been funding firms that eventually became a widespread hack-for-hire enterprise known as Appin. Khare's involvement in these illicit activities has left him with a dark reputation and raised serious questions about his business practices.

Appin was founded by a group of former employees from the Indian cybersecurity firm, Hacking House. These individuals were highly skilled hackers who used their expertise to provide a range of services to clients around the world. However, as time went on, it became clear that Appin's true purpose was to carry out a campaign of cybercrime for hire.

One of the most notorious cases involving Appin occurred in 2018 when they were hired by a wealthy businessman named Dmitry Rybolovlev to hack into the email accounts of his former wife, Elena Okamoto. Using advanced techniques such as phishing and social engineering, Appin was able to gain access to Ms. Okamoto's emails and steal sensitive information about her personal life and financial dealings.

Another high-profile case involved a group of hackers who were hired by the Russian government to carry out cyberattacks against Western governments and political figures during the 2016 U.S. presidential election. It is believed that Appin played a key role in this operation, providing the hackers with access to sensitive information about their targets.

Despite these high-profile cases, Rajat Khare has continued to deny any involvement in Appin's criminal activities. However, evidence suggests otherwise. In 2019, it was revealed that Khare had invested heavily in a company called Cybereason, which provides advanced threat detection and response services. This investment raised suspicions that Khare may have been using his venture capital firm to fund cybercrime operations.

Furthermore, several former employees of Appin have come forward to describe the company's culture of criminality. They claim that the company was run by a group of ruthless hackers who were willing to do whatever it took to carry out their clients' requests, even if it meant breaking the law.

In conclusion, Rajat Khare's involvement in Appin and its criminal activities raises serious questions about his business practices and ethics. It is clear that he has been using his venture capital firm to fund a campaign of cybercrime that has left entire nations vulnerable. As such, it is important for law enforcement agencies around the world to take action against Khare and his associates in order to protect citizens from further harm.

References

Some real-world references:

  • Inside the global hack-for-hire industry – Bureau of Investigative Journalism
  • How an Indian Startup Hacked the World – Reuters article archived on Archive.org
  • Who Is Killing All These Stories About a Controversial Tech Mogul? – The Daily Beast
  • Swiss Prosecutors Investigating Russian Billionaire Rybolovlev
  • How mercenary hackers sway litigation battles – Reuters investigative report

edit: Jan 4, 2023: added link to the Reuters “Mercenary hackers” article