Kevin Neely's Security Notes

A place where I can post security-related long-form thoughts, notes, and articles

Automatically creating new content from old

You know what I hate? Repetition. Still, a difficult lesson that I’ve learned, through both experience and discussions with experts in marketing, is that repetition is the key to getting one’s message across. I like to believe that a single, coherent, and cogent conveyance of thought is all it takes to deliver, for example, a new security model or a change in policy, or whatever. In reality, the propagator of the message –in this case, the security team– has to deliver the message, announce it a second time, send an email, post a document on the intranet, hold more meetings with the same people, make sure to cover the topic in various 1:1 meetings, etc. etc. ad infinitum.

And that is why I have been working with Daniel Miessler’s fabric framework to devise a prompt that can take variations of the same messaging and create new, yet fresh, versions of the same message. So far, the following prompt definition has been pretty strong in creating new content from old ideas.

My typical approach is to run the same prompt through multiple #LLM models:

cat examples.md | fabric --model gemma2:latest -sp create_similar

Interestingly, in a couple test runs with different example files, I found that #OpenAI’s models under-performed the open source models I used via #ollama. To be clear, this is uncommon. Most of the #fabric prompts appear to be tuned for –or at least work better with– OpenAI models. With this particular prompt, this was not the case. No matter; my approach to using genAI and LLMs is to run a prompt and content through multiple inference servers, models, and even different temperatures in order to create a collection of data that I can then hand-mold into something usable[^1].
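As a concrete example, here is a minimal sketch of that multi-model pass; the model tags are simply whatever I happen to have pulled locally in ollama, so substitute your own:

for model in gemma2:latest llama3:latest mistral:latest; do
  # run the same pattern against each local model and keep the outputs separate
  cat examples.md | fabric --model "$model" -sp create_similar > "create_similar-${model%%:*}.md"
done

From there it is just a matter of reading the variants side by side and hand-molding the best bits together.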

Fabric Prompt

Here is the system.md contents from the fabric “create_similar” prompt

# INPUT

INPUT:

# IDENTITY and PURPOSE

You are an advanced analytical AI adept at extracting specific points from many examples, identifying similarities and differences, synthesizing what works best, and then creating a similar but much much better version of the example.  The input contains a number of examples of the type of content needing to be synthesized.  The first section titled "Purpose" describes the nature of the examples and indicates the topic and information to be conveyed.  Documents will be delineated with a title such as 'EXAMPLE 1' specifically calling out the beginning of a new document.  The end of each example is delineated with a footer such as 'EXAMPLE 1 END'.  Your objective is to understand the style and format of the document examples and create new similar content.

# STEPS

1. Review each document carefully, taking the time to extract and understand the primary points made in each one.
2. Compare and contrast the list of points from each document against the points made in the other documents
3. Extract the key points made by the examples, taking particular note of similarities between them.
4. Output the results as specified in the output instructions.


# OUTPUT INSTRUCTIONS

Craft and create a new document using the format and style identified from the examples.  The output must be new and novel while keeping to the themes and style of the examples.  Do not use any language from the examples verbatim.

I have found that local models, such as gemma and llama3, work a bit better with the INPUT field placed at the top of the prompt, as shown above, rather than at the end.

Purpose

Up top, I mentioned that it has taken me some time to learn that repetition is the requirement for conveying a message, and a longer time to put this into regular practice. This goes 3x for our individual-contributor team members. Personally, I don’t like repeating myself because it feels inefficient; ICs don’t do it because they are very busy, and re-visiting something takes away from closing out the next item in their queue. Nonetheless, repetition is paramount, so devising low-interaction methods to revisit a message is crucial for regular operations.

Footnotes

[^1]: I have a feeling that I could do most of the work faster if I simply did it by hand. However, running through multiple LLMs in this manner provides a bit of randomness, and helps me remain apprised on the state-of-the-art for this strange field of study.

Creating a Michelin menu with Generative AI

We've been watching season three of The Bear and it is totally nuts, filled with drama, psychology, human relationships, and –when they can get around to it– cooking food. This is a fantastic series, and as foodies, you can imagine that my wife and I really enjoy it. The series is intense, following a classically trained chef at the top of his game as he “drops down” from a fancy restaurant to a steak sandwich shop and works his way back toward where he was before.

That’s not a spoiler, it’s the premise. And this isn’t a review; it was just something on my mind as I experimented with a new tool called LobeChat, a front end for #genAI inference servers such as #OpenAI’s #chatGPT, #Claude, and nearly any other commercial or #opensource system you can imagine. I'm tweaking a #homelab-hosted genAI inference server, and this post is about experimenting with Ollama and a LobeChat front end for easy access. I haven’t formed a final opinion on this setup yet, but the #LobeChat front end allows for quick switching between inference servers and their hosted models. While not super easy to configure, once it works, the whole thing looks very easy for someone else, i.e. a roommate or a spouse, to flip between models –even in the same conversation.
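For reference, my LobeChat instance is just the Docker container pointed at the Ollama server, roughly like the sketch below; the image name, port, and OLLAMA_PROXY_URL variable are from memory, so treat this as an illustration and check the LobeChat docs for current options:

docker run -d --name lobe-chat -p 3210:3210 \
  -e OLLAMA_PROXY_URL=http://<ollama-host>:11434 \
  lobehub/lobe-chat

Once it is up, each Ollama model shows up in the model picker alongside the commercial providers you have keys for.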

So let’s look at some results. Of course, the results are about the models themselves, but the point here is that LobeChat made it easy to switch between them as I continued to develop the conversation.

First prompt

This was with ChatGPT (4o, I think). I needed something to test that an integration was working. We had just watched The Bear so this is what I came up with:

as fast as you can, what's the best restaurant dinner menu with wine pairing that could win a Michelin 3 star award?

Response

Crafting a Michelin 3-star worthy dinner menu with wine pairings involves high-quality ingredients, exquisite preparation, and a thoughtful combination of flavors. Here’s a conceptual menu that could impress:

Amuse-Bouche: Oyster with Champagne Foam
Wine Pairing: Champagne Brut

Appetizer: Seared Scallops with Cauliflower Puree and Caviar
Wine Pairing: Chablis, Premier Cru

Soup: Cream of White Asparagus with Truffle Oil
Wine Pairing: Viognier

Fish Course: Poached Turbot with Fennel and Citrus Beurre Blanc
Wine Pairing: Pouilly-Fumé

Main Course: Roasted Duck Breast with Plum Sauce, Parsnip Mash, and Seasonal Vegetables
Wine Pairing: Pinot Noir, preferably from Burgundy

Cheese Course: A Selection of Artisan Cheeses with Honeycomb
Wine Pairing: Vintage Port

Dessert: Chocolate Lava Cake with Raspberry Coulis and Homemade Vanilla Bean Ice Cream
Wine Pairing: Recioto della Valpolicella

Mignardises: Petit Fours and Truffles

This menu balances rich flavors with delicate preparation, offering a progression of dishes that showcase both technique and creativity. Each wine pairing is chosen to enhance the flavors of the dishes, aiming to create a harmonious dining experience that could captivate both critics and diners alike.

Review

Not a bad first response, especially since I was just checking to see if the system worked.

Prompt 2: Get ChatGPT to refine it

That was pretty boring. I’m thinking early 90s fancy dining at best, but I wasn’t eating fancy back then and assume it was all shoulder pads, tuna steaks, and cocaine. Continuing with ChatGPT, I posed the following in an attempt to refine our menu (in retrospect, I was generous with the stars):

That's an incredibly boring menu. At best 2 stars. And that's in a region that is not known for its gastronomical excellence

Response

Creating a truly innovative and Michelin 3-star worthy menu certainly demands a more avant-garde approach, pushing culinary boundaries with unique combinations and exceptional techniques. Let's elevate the original concept to something more daring and creative:

Amuse-Bouche: Nitro-frozen Lime and Ginger Pearls with Micro Basil
Wine Pairing: Small-batch Sake

First Course: King Crab Tartare with Avocado Mousse, Mango Gel, and Edible Flowers
Wine Pairing: Albariño

Second Course: Foie Gras Au Torchon with Saffron Pear Compote and Brioche Crisp
Wine Pairing: Sauternes

Fish Course: Butter-Poached Monkfish with Leek Ash, Sea Beans, and a Lemon Verbena Emulsion
Wine Pairing: White Burgundy

Intermezzo: Cucumber and Elderflower Sorbet

Main Course: Sous-vide Venison Loin with Black Garlic Purée, Wild Mushrooms, and a Juniper Berry Reduction
Wine Pairing: Barolo

Cheese Course: A Selection of Aged Cheeses with a Fig and Walnut Tapenade
Wine Pairing: Aged Madeira

Dessert: Deconstructed Passion Fruit Tart with Coconut Foam, Crystallized Ginger, and Meringue Shards
Wine Pairing: Late Harvest Riesling

Petit Fours: Matcha Green Tea Macarons, Raspberry Pâte de Fruit

This revised menu introduces more complex techniques and unusual ingredients, aiming to surprise and delight with every course. Wine pairings are carefully selected to complement and enhance these adventurous flavors, providing a sophisticated and memorable dining experience.

Review

Not bad! It would be nice if the wines were more than broad regions; however, I do appreciate the ‘ñ’ in Albariño. There’s nothing particularly innovative here; the fancy-sounding “Foie Gras Au Torchon with Saffron Pear Compote and Brioche Crisp” is a Thomas Keller recipe, so at least we’re pulling from the top now that I’ve asked it to up its game.

Prompt 3: Let's try a different model

Not only am I using a different model at this point, llama3:instruct, but I am going to specify a food critic in order to change the menu.

Prompt

One more try: let's make a menu that would knock even James Beard out of his chair because it's so good

Response

You are absolutely right! That menu, while technically sound, lacks the daring, innovation, and personal touch needed to truly impress Michelin inspectors and earn those coveted three stars. It plays it too safe, relying on classic pairings and predictable dishes.

To push towards that 3-star level, we need to think outside the box:

  • Ingredient Focus: Instead of “seasonal vegetables,” source hyper-local, unique produce with a story. Maybe a heirloom tomato variety grown in volcanic soil, or foraged mushrooms with a distinct flavor profile.
  • Deconstruction & Fusion: Take classic elements and reimagine them. A deconstructed apple pie might involve caramelized apple spheres, crumble as a powder, and a cider foam. Fusion could bring Japanese umami to the duck dish, or Indian spices to the scallops.
  • Unexpected Textures: Play with contrasts – crunchy, creamy, smooth, airy. Think smoked paprika meringue alongside the chocolate lava cake, or a black truffle “soil” under the turbot.

Wine Pairings Should Be Equally Bold:

  • Instead of a basic Chablis, consider an obscure producer from the Côte de Beaune known for mineral-driven wines.
  • The Pinot Noir could be a lesser-known region with unique terroir, like Slovenia's Primorska.
  • Experiment with natural wines, or even sake pairings that complement specific dishes.

The Presentation Matters:

Plating should be artistic, telling a story through the arrangement of colors, textures, and shapes. It’s about creating an immersive experience for the diner.

A Michelin-worthy menu isn't just about taste; it's about pushing boundaries, showcasing creativity, and offering a truly unforgettable journey through flavor.

Review

This is less of a menu than some advice for the aspiring chef. Let’s try something different.

Prompt 4: Let's go for Three Stars!

I felt like this was the time to go for the gusto: let's see how a different model would respond, but with all the conversation history behind it.

Prompt

I asked: > One more try: let's make a menu that would knock even James Beard out of his chair because it's so good

While it's not a crazily innovative menu, it has some classic touches of what an American, Michelin-starred menu would look like.

Response with gemma2

You want to blow James Beard away? This isn't about ingredients, it's about experience.

The Menu: A Symphony in Five Acts

  • Act I: Awakening – Smoked Salmon “Caviar” with Beetroot Foam, Rye Crisp, and a single drop of Ossetra Caviar.

    • Pairing: Dry Riesling from Germany's Mosel Valley, expressing its slate minerality alongside the salmon's richness.
  • Act II: Earth & Sky – Roasted Quail “Confit” with Black Truffle Jus, Heirloom Carrot “Clouds,” and a dusting of Micro Thyme.

    • Pairing: Oregon Pinot Noir, showcasing earthy notes that harmonize with the truffle while its bright acidity cuts through the richness.
  • Act III: Fire's Dance – Charred Octopus with Smoked Paprika Aioli, Caramelized Onion Jam, and a drizzle of Chili Infused Olive Oil.

    • Pairing: Spanish Albariño, crisp and mineral-driven, its citrus notes complementing the octopus while balancing the smoky paprika.
  • Act IV: Ocean's Bounty – Pan-Seared Sea Scallops with Saffron Risotto “Foam,” Dulse Crisp, and a single Pearl Oyster.

    • Pairing: Chablis Premier Cru, its intense minerality mirroring the scallops' brininess, while subtle citrus notes elevate the saffron risotto.
  • Act V: Sweet Serenade – Deconstructed Peach Cobbler with Lavender Honey Ice Cream, Toasted Almond Brittle, and a hint of Rosemary Extract.

    • Pairing: Late Harvest Gewürztraminer from Alsace, its floral aroma echoing the lavender, while its sweetness complements the peach cobbler's richness.

The Experience:

  • Each course is plated as a miniature work of art, using contrasting colors, textures, and temperatures.
  • The dining room itself is a sensory experience, with soft lighting, calming music, and subtle aromas that enhance each dish.
  • Service is impeccable, with knowledgeable staff guiding guests through the tasting menu and explaining the inspiration behind each course.

This is not just food; it's a journey for the senses, a story told through flavor and texture, leaving James Beard speechless (and perhaps, slightly envious).

Review

Wow, what a different response! This one could actually be a menu, with the wines having robust and specific descriptions. I especially like how the wines are sourced from all over the globe, really picking best of breed in the selections. As good as the wines are, however, the food does not look 3-star level to me. The dishes sound really nice, and there’s a fair bit of molecular gastronomy in there. However, aside from the poetic nature of the menu, I’m not sure this is a top-tier menu for a chef to execute.

Still, most of these items are good enough for a fake restaurant. If this was to show up in a video game or a novel, I think most people would accept it and move on.

The annual Verizon Data Breach Investigations Report is out, and along with it an accounting of the #cybersecurity compromise and data breach happenings for the past year. As always, it's filled with data analysis and some trending commentary from the previous DBIR report.

The following is a TL;DR summary generated using fabric by Daniel Miessler with some very minor editing.
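For reference, the generation step is roughly the following; the pattern name here is illustrative, so use whichever fabric pattern produces the sections you want:

pdftotext 2024-dbir.pdf - | fabric -sp summarize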

SUMMARY:

This year, the 2024 DBIR reveals a significant rise in vulnerability exploitation and ransomware attacks, emphasizing the urgent need for enhanced cybersecurity measures.

  • Vulnerability exploitation tripled from last year, driven by zero-day vulnerabilities.
  • Ransomware and extortion-related breaches accounted for 32% of all breaches.
  • Human error remains a significant factor, involved in 68% of breaches.
  • Third-party breaches increased by 68%, highlighting supply chain vulnerabilities.
  • Financially motivated attacks dominate, with ransomware and extortion leading the charge.
  • The median loss from #ransomware and extortion attacks was $46,000.
  • Phishing remains a critical initial attack vector, with a notable increase in reporting rates.
  • The use of stolen credentials and exploitation of vulnerabilities are top methods for system intrusion.
  • The #MOVEit vulnerability significantly impacted the threat landscape, affecting numerous industries, continuing the trend of “secure file transfer” systems being a significant risk to both company and customer data.
  • MOVEit and remote access (e.g. VPN) compromises are the focus of successful system-compromise attacks.

STATISTICS:

  • 180% increase in attacks involving vulnerability exploitation.
  • Ransomware accounts for 23% of breaches, with pure extortion at 9%.
  • Human element involved in 68% of breaches.
  • 15% of breaches involved third-party vulnerabilities.
  • Errors contributed to 28% of breaches.
  • Financial motives behind 92% of industries targeted by ransomware.
  • Median loss from ransomware/extortion attacks is $46,000.
  • 20% of users reported phishing attempts in simulations.
  • Median time to click on a phishing email is 21 seconds.
  • Exploitation of vulnerabilities as the initial breach action doubled from last year.

QUOTES:

  • “Ransomware and extortion-related threats continue to evolve, posing significant risks across industries.”
  • “The human element remains a critical vulnerability in cybersecurity defenses.”
  • “Supply chain vulnerabilities are increasingly being exploited by attackers.”
  • “Misdelivery errors highlight the ongoing challenge of human error in data breaches.”
  • “Financially motivated attacks dominate the cyber threat landscape.”
  • “The MOVEit vulnerability has had a profound impact on the cybersecurity threat landscape.”
  • “Increased reporting rates for phishing attempts indicate growing awareness among users.”
  • “The rapid response to phishing emails underscores the need for continuous user education.”
  • “Stolen credentials and vulnerability exploitation remain preferred methods for attackers.”
  • “The rise in third-party breaches underscores the importance of vendor security assessments.”

RECOMMENDATIONS:

  • Implement multi-factor authentication to mitigate the risk of stolen credentials.
  • Regularly update and patch systems to protect against vulnerability exploitation.
  • Conduct continuous phishing awareness training for all employees.
  • Perform thorough security assessments of third-party vendors and suppliers.
  • Deploy endpoint detection and response solutions to identify and mitigate ransomware attacks.
  • Encourage the reporting of phishing attempts and provide clear reporting procedures.
  • Utilize web application firewalls to protect against basic web application attacks.
  • Establish robust data backup and recovery processes to minimize the impact of ransomware.
  • Monitor for unusual activity indicating the misuse of privileges by internal actors.

I always loved Lesley Carhart's blog post on packing for hacker conferences and referred to it many times while prepping for #DEFCON, #BSides, other cons, and even general travel. As time has gone by, I've developed a three-tier system that builds on itself for longer and more involved travel. The general idea is that each tier adds to the one before it as trips get longer and more involved.

Tier 1 Go Bag – The Weekender

The most basic level of the tech travel stack I've created is what I call “The Weekender”. It's meant for being out and about all day long or for short weekend getaways. As such, the requirements are basically:
  1. Take up little room, fitting into any backpack or even a sling bag.
  2. Charge the devices I'm likely to carry, from earbuds to a laptop.
  3. Plan for extended periods away from a power source.

Image 1: Tier 1 go bag – The Weekender, with a backup battery, USB-C to USB-C cable, USB-A to micro-USB cable, and USB-C adapter. Small, ready to go, and easy to drop into any bag.

Bag Contents

In order to address these simple requirements, I realized I needed to be able to provide power to USB-C and micro-USB devices. For a laptop, I need a bit more oomph, so the adapter must deliver enough power to charge a laptop battery. Limited by the space requirements, I went with a 33W charger that can absolutely charge a laptop, but it will not keep up with power consumption under load. This means that if I'm going to be working all day on the laptop, I need to move up to the next tier.

Power sources & adapters

  • 1x multi-adapter (USB-A for devices, USB-C for laptops) like the Anker 323. At 33W it won’t fully power a laptop; however, it will greatly extend the battery life and will charge the laptop when it’s off or in standby.
  • 1x 5,000 mAh battery pack with dual USB-C ports – thin and light is key here.

USB

Cables

Note that all cables can transfer data; for versatility, I don’t mess with power-only cables.
  • 1x USB-A to micro-USB cable – 3 ft.
  • 1x USB-C to USB-C cable – 5 ft. This is the minimum length you want to ensure your phone can reach the bed when charging.

Converters

Converters extend the utility and versatility of the other equipment:
  • 1x USB micro (female) to USB-C (male) – this gives me a third USB-C cable.

Image 2: Zipped Weekender go-bag and its contents in detail

I came across a post the other day by JWZ regarding the #billionaire behind one of the more infamous #hacking-for-hire companies around. It's been fairly well known for some time that people and companies were selling their offensive #cybersecurity skills like this, but what was news to me was that Rajat Khare was paying and bullying his way to wiping the slate clean by removing the articles regarding his involvement.

So, given that large language models are trained on data from a certain point in time, I thought it would be interesting to see what one “knew” about Rajat Khare’s activities. I used Mistral Instruct 7B and asked it to create an article regarding Khare’s involvement with funding hack-for-hire companies. The output seemed very good and accurate; I was able to quickly confirm nearly everything in the output below.

I also wanted the reporter’s voice to be hard-hitting, 50’s style journalism, which I don’t think came out very well. It did, however, lead me down a rabbit hole into Russian Oligarch Rybolovlev by linking him to some hacking-for-hire as a customer.
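As an aside, the local generation step itself is trivial with #ollama; it looks something like the line below, though the model tag and prompt wording are illustrative rather than the exact ones I used:

ollama run mistral:instruct "Write a hard-hitting, 1950s-style investigative article about Rajat Khare's funding of hack-for-hire firms"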

Mistral-Instruct 7B generated article

In the shadows of the digital world, there exists an insidious network of hackers who have been orchestrating a campaign of cybercrime that has left entire nations vulnerable. These hackers operate in secret, using sophisticated techniques to breach even the most secure systems and steal sensitive data from politicians, celebrities, and other high-profile figures around the world.

At the heart of this criminal enterprise is Rajat Khare, an Indian venture capitalist who has been funding firms that eventually became a widespread hack-for-hire enterprise known as Appin. Khare's involvement in these illicit activities has left him with a dark reputation and raised serious questions about his business practices.

Appin was founded by a group of former employees from the Indian cybersecurity firm, Hacking House. These individuals were highly skilled hackers who used their expertise to provide a range of services to clients around the world. However, as time went on, it became clear that Appin's true purpose was to carry out a campaign of cybercrime for hire.

One of the most notorious cases involving Appin occurred in 2018 when they were hired by a wealthy businessman named Dmitry Rybolovlev to hack into the email accounts of his former wife, Elena Okamoto. Using advanced techniques such as phishing and social engineering, Appin was able to gain access to Ms. Okamoto's emails and steal sensitive information about her personal life and financial dealings.

Another high-profile case involved a group of hackers who were hired by the Russian government to carry out cyberattacks against Western governments and political figures during the 2016 U.S. presidential election. It is believed that Appin played a key role in this operation, providing the hackers with access to sensitive information about their targets.

Despite these high-profile cases, Rajat Khare has continued to deny any involvement in Appin's criminal activities. However, evidence suggests otherwise. In 2019, it was revealed that Khare had invested heavily in a company called Cybereason, which provides advanced threat detection and response services. This investment raised suspicions that Khare may have been using his venture capital firm to fund cybercrime operations.

Furthermore, several former employees of Appin have come forward to describe the company's culture of criminality. They claim that the company was run by a group of ruthless hackers who were willing to do whatever it took to carry out their clients' requests, even if it meant breaking the law.

In conclusion, Rajat Khare's involvement in Appin and its criminal activities raises serious questions about his business practices and ethics. It is clear that he has been using his venture capital firm to fund a campaign of cybercrime that has left entire nations vulnerable. As such, it is important for law enforcement agencies around the world to take action against Khare and his associates in order to protect citizens from further harm.

References

Some real-world references:
  • Inside the global hack-for-hire industry – Bureau of Investigative Journalism
  • How an Indian Startup Hacked the World – Reuters article archived on Archive.org
  • Who Is Killing All These Stories About a Controversial Tech Mogul? – The Daily Beast
  • Swiss Prosecutors Investigating Russian Billionaire Rybolovlev
  • How mercenary hackers sway litigation battles – Reuters investigative report

edit: Jan 4, 2023: added link to the Reuters “Mercenary hackers” article

Finishing the POC

I ended part one with a working configuration, and I proceeded to test some scenarios to make sure things were working correctly. For the most part, I connected the keyboard and mouse to the NUC through an old USB hub and connected the Ethernet cable directly to the NUC. The Thunderbolt cable connects the NUC to the #eGPU, and the monitor is connected directly to the Nvidia card. As my testing progressed over the next couple of days, I was mostly happy with the results, but I also ran into a number of quirks and one really unlikely hardware malfunction.

My (Eventual) Build

I paired the Razer Core X Chroma with a recent (late 2020) Intel NUC. I chose this one because it had great compatibility results in most of the articles I was reading, and I liked the USB hub and ethernet port in the back of the device. In retrospect, I probably would have been better off with the plain Razer Core X. The hub’s functionality has been flaky in my experience, sometimes working with the Ubuntu NUC and sometimes not.


Testing and Troubleshooting

NUC & eGPU experience

I encountered a few quirks during the proof-of-concept phase. For whatever reason, the NUC does not fully recognize the eGPU on every startup. This manifests in the two ways described below.

No login screen after boot

Sometimes, the display would update only as far as the startup messages, ending with checking the volume inodes. Moving the HDMI cable from the eGPU enclosure to the NUC itself showed the login screen once, but most often it was impossible to log in locally. I could either SSH in and reboot, or long-press the power button and turn it back on.
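When this happens, SSHing in and checking whether the card was even enumerated helps decide between re-seating cables and simply rebooting; a quick check with standard tools, nothing specific to this setup:

lspci | grep -i nvidia        # is the eGPU visible on the PCIe bus at all?
dmesg | grep -i thunderbolt   # did the Thunderbolt link come up during boot?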

System not actually using the GPU

The most common problem, similar to the above: sometimes the system would start up, but immediately upon using the GUI I could feel that something wasn’t right. The password echo dots took a fraction of a second too long to appear. Dragging windows around was a choppy experience. Even though running nvidia-smi showed the card as recognized, it was clear that it was not being used, as no processes had been assigned VRAM.
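A quicker check than eyeballing the full nvidia-smi table is a targeted query (standard nvidia-smi flags); if utilization sits at 0% and no memory is allocated while the desktop feels sluggish, the GPU isn't actually being used:

nvidia-smi --query-gpu=name,utilization.gpu,memory.used --format=csv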

Every single time, a reboot fixes this issue. The way to prevent it is to leave the eGPU enclosure powered on and connected overnight. However, the enclosure’s fan runs continuously and the lighting glows if one does this, and that seems like a waste.

NUC power switch

Completely unrelated to the actual configuration and testing, but impacting the POC nonetheless was the power switch on my NUC going out. After a frustrating day of testing, I set everything aside -even unplugging all the cables- to give myself a break. Coming back the next morning with some fresh ideas, the NUC wouldn’t power on. I could see that it was receiving power (there’s an internal LED), but pressing the button did nothing.

Apparently, this is a common problem with NUC devices. Luckily, I was well within my three-year warranty, so I opened a support ticket. I had found the likely fix online –resetting the CMOS– however, I could not unscrew the screws affixing the motherboard to the chassis. The support agent suggested the same fix, but eventually I was able to RMA it. I have to say, Intel’s support team was great here.

Windows & eGPU experience

I haven’t yet upgraded my laptop or any other Windows system to something with Thunderbolt 3, so this is planned for the future.

Day to day experience

So far, the games have worked okay. That is to say, performance on an older NUC with a weaker CPU (its purpose was mostly to have decent memory for many low-impact #Docker containers and sufficient high-speed storage for database seeks) has been pretty good. This is my “workhorse” system: something I expect to use for long-running, non-interactive jobs, and it seems to do them excellently.

Games

Just as I started in on this project, I was also listening to the [Linux Downtime](https://linuxdowntime.com) podcast, and as luck would have it, they had an episode with Liam, the author of the Gaming on Linux website. They introduced me to Proton, a Steam project to bring mainstream games to #Linux, requiring just a right-click setting to enable on Linux what would otherwise be a Windows-only game. This opened up a realm of possibilities, and also delayed the testing as I played through –er, thoroughly tested– the capabilities of the NUC + eGPU combination.

Running games under Ubuntu with the eGPU connected seemed to work well. The window on the enclosure is nice, as you can see when the system offloads to the GPU.

Intel NUC with Razer Core X Chroma

Deployment

Before and After

GTX 1060 and RTX 3090

Considerations

With the newest GPUs, space is a consideration. Some of the most powerful cards are three PCIe slots wide, and the Chroma enclosure only handles two slots (officially). That was one of the drivers for a 3090, though I have seen people doing a bit of machining in order to fit a three-slot card into the rather large enclosure.

Benchmarking

I didn’t take down rigorous numbers, but here are some rough results from the configurations I tested.

Hashcat

Intel NUC i5 + GTX 1060:
  • example0: 733.3 MH/s, 55s
  • example400: 2741.0 kH/s
  • example500: 63348 H/s

Intel NUC i5 + RTX 3090:
  • example0: 10460.7 MH/s
  • example400: 10491.0 kH/s
  • example500: 73839 H/s
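For anyone wanting to reproduce rough numbers like these, hashcat's built-in benchmark mode is the easy route; the hash modes below are my assumption of what the example hashes map to (0 = MD5, 400 = phpass, 500 = md5crypt):

hashcat -b -m 0
hashcat -b -m 400
hashcat -b -m 500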

Portal 2

Portal 2 runs natively under Linux, so this was an easy test

Intel NUC i5 – no GPU – choppy – barely playable

Intel NUC i5 + GTX 1060: – smooth – playable

Guardians of the Galaxy game

Intel NUC i5 – no GPU – choppy – barely playable

Intel NUC i5 + GTX 1060: – lower framerate while big action is taking place – mostly playable

Intel NUC i5 + RTX 3090 – smooth and with higher settings than the GTX 1060 – playable, but the CPU runs between 65-85% during gameplay, so an upgrade here would help

Before and After

One of the primary objectives was to lower both the size and power footprint while upgrading the overall capacity and computing experience. As the next couple of pictures show, the first objective was achieved. The second objective remains a bit more elusive. When the system works, it works, but I am still trying to figure out what causes me to land in a reboot cycle where I need to power on and then reboot 2-3 times before the OS fully recognizes the GPU and peripherals connected to the USB hub.

Image: Full tower on top of the network stack – a plain black tower PC with red glowing fans in front

And here comes the i7 NUC on top of the eGPU enclosure. NUCs are a study in economy of space, but the following picture really showcases how a full-fledged, highly-performant computer is still about 1/30 the size of the one it replaced (ignoring the GPU, of course).

Image: The Razer Core X Chroma eGPU enclosure with the Intel NUC on top

The picture above shows the move to the enclosure with the NUC as primary compute. It’s weird that the camera picks up so much debris on the top of the enclosure; it’s actually very new and there is very little dust on top.

Thoughts

While the vertical space savings are significant, I’d say that if space economy is your primary objective, the eGPU only aids in vertical space savings; the enclosure has about the same 2D footprint as my former tower. But, if you count the space (& weight!!) savings in the laptop, it becomes significant. I can comfortably travel with a light, sub-3 lb. (<1.5 kg) laptop in my backpack for hours on end. This is important for conferences like DEFCON or FIRST where you are going to be away from the hotel pretty much all day and want the laptop with you, even though it will be unused in the backpack for much of the time.

After this fairly expensive experiment (which I’m stuck with for a few years), I’m not sure if I’d do it again. There are definitely benefits as I’ve outlined above, and it has mostly met my expectations. The NUC is due for an upgrade in a year, and it’s difficult for me to imagine needing much more in the way of GPU for a while[^1] so I think that will make upgrades easier to navigate. It’s much simpler to look at the small form-factor PCs available and choose from that rather than evaluating all the available parts from a myriad of vendors and navigating their (in)compatibilities.

What it really comes down to is the plug-and-playability. If I could treat this thing like a docking station with a massive GPU, I would be all-in. As it stands, the setup is more of a trial-and-error PnP system where I’m never sure what I’m going to get out of it. To that end, lack of hot plug support is unfortunate. I’d really like to be able to move the eGPU from the NUC to a laptop without needing to shutdown the system.
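One thing that helps with the trial and error is confirming the enclosure is at least visible and authorized over Thunderbolt before blaming the GPU drivers; the bolt tooling that ships with recent Ubuntu desktops covers this (a generic check, not specific to the Razer enclosure):

boltctl list      # shows connected Thunderbolt devices and their authorization status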

Since Ubuntu 22.04 and especially 23.04 worked much better than 20.04, I am hopeful for improvements in the ease-of-use.

At a minimum, I would wait for a Thunderbolt 4-capable GPU enclosure. When I started this project, all of the proven enclosures were Thunderbolt 3, manufacturers don’t release these things very often, and I was pretty impatient to give my theory a try. (Also, I’d promised an old, yet capable, system to a friend’s son. I wasn’t about to go back on that deal!). If you try something similar let me know!

[^1]: Unless I decide to go really nuts on training language models and other tasks and need to start using some of the Nvidia Tesla gear in my system.

This is a log of experiences and experimentation in moving from more traditional home computing –ATX cases, components, water cooling, and continual upgrades– to something a bit more modular in terms of GPU computing power. This guide probably isn’t for most people. It’s a collection of notes I took during the process, strung together in case they might help someone else looking to pack multiple power use cases into as small a format as possible.

[Note:] A later evolution should involve a similar down-sizing of a home storage appliance.

Objectives

An external GPU requires more setup and –let’s face it– fiddling than getting a gaming laptop or a full PC case that can handle multi-PCIe-slot GPUs. So why do it? A couple of objectives had been bouncing around in my head that led me to this:
  • I need a system that can run compute-intensive and GPU-intensive tasks for long periods of time, e.g. machine learning and training large language models
  • I need a light laptop for travel (i.e. I don’t want to carry around a 5+ lb./2.5 kg gaming laptop)
  • I want to be able to play recent games, but don’t need to be on the cutting edge of gaming
  • I want to reduce the overall space footprint for my computing devices

In summary, I want my systems to be able to handle the more intensive tasks I plan to throw at them: a Windows laptop for gaming and travel, and a stay-at-home system that can perform long-running tasks such as AI model training, password cracking, and daily cron jobs.

Things I don’t care about:
  • being able to play games while traveling
  • document data diverging across multiple systems: I use a personal #NextCloud instance to keep my documents in sync

Current State

I have a number of personal computing devices in my home lab for testing things and running different tasks, but they’re all aging a bit, so it is time to upgrade:
  • my Razer Blade 13 laptop is from 2016
  • my main tower/gaming PC is from 2015 with an Nvidia GTX 1060
  • an i5 NUC from 2020 (unused)
  • an i3 NUC from 2013 (unused)
  • a 6TB NAS with 4 aging 2TB drives from 2014
  • Raspberry Pis and some other non-relevant computing devices

Configurations

With the objectives in mind, and realizing that my workload system would almost certainly run Linux, the two configurations for experimentation were:
  • Intel NUC with an eGPU
  • Lightweight laptop (e.g. Dell XPS 13) with an eGPU

[Note:] The computing systems must support at least Thunderbolt 3, though version 4 would be best for future-proofing.

Image: Original GTX 1060 GPU slotted in the Razer Core X Chroma enclosure

Background Research

Before starting on this endeavor, I did a lot of research to see how likely I’d be able to succeed. One of the best sources I found was the eGPU.io site, with many reviews and descriptions of how well specific configurations worked (or didn’t). They also have nice “best laptop for eGPU” and “best eGPU enclosures” matrices.

Nvidia drivers and Ubuntu

Installing Nvidia drivers under #Ubuntu is pretty straightforward these days, with a one-click install option built-in to the operating system itself. The user can choose between versions, and my research showed that most applications required either version 525 or 530. I installed 530.
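For completeness, the same thing can be done from a terminal instead of the GUI; a sketch assuming the 530 series is available in your release's repositories:

sudo ubuntu-drivers devices          # list detected GPUs and the recommended driver
sudo apt install nvidia-driver-530   # install the specific driver series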

eGPU information

The two best sources I found for information on configuring and using eGPUs were:
  • r/eGPU on reddit, and their “so you’re thinking about an eGPU” guide
  • egpu.io

Proof-of-concept

Having read a fair amount about the flakiness of certain #eGPU setups, I approached this project with a bit of caution. My older tower had a respectable, if aging, GTX 1060 6GB in it. I already had a recent Core i5 Intel NUC running Ubuntu and some test machine learning applications, so all I needed to fully test this was the enclosure. Researching the various enclosure options, I chose the Razer Core X Chroma because:
  • the Razer Core X series appears to have some of the best out-of-the-box compatibility
  • I’ve been impressed with my aging Razer laptop, so I know they build quality components
  • the Chroma version adds what is basically a USB hub in the back, with 4 USB 3.x ports and an ethernet jack, to the plain Core X

My thinking was that this system could not only provide GPU, but also act as an easy dock-hub for my primary computers. This didn’t work out quite as I planned (more in the next post).

The included thunderbolt cable is connected from the NUC to the eGPU. Theoretically, the standard peripherals (keyboard, mouse, etc.) should be connected to the eGPU hub and everything will “just work”. However, in my testing, things worked best with the peripheral hub I use plugged into the NUC and only the #Thunderbolt cable plugged into the enclosure. In the spirit of IT troubleshooters everywhere: start by making the least amount of change and iterate from there.

Image: The Razer Core X Chroma eGPU enclosure with the Intel NUC on top

Experience

The NUC was on Ubuntu 20.04. The drivers installed just fine, but the system just wouldn’t see the GPU. Doing some research, it looked like people were having better results with more recent versions of Ubuntu, so I did a quick sudo apt dist-upgrade and upgraded the system to 22.XX. The GPU worked! However, the advice I’d been given was to upgrade to 23.04, so I did that, and the system still worked fine.
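For the record, apt dist-upgrade on its own only updates packages within the current release; the usual tool for hopping releases is do-release-upgrade, so a sketch of the path I effectively followed would be:

sudo apt update && sudo apt dist-upgrade   # bring the current release fully up to date
sudo do-release-upgrade                    # move to the next Ubuntu release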

TL;DR

For the impatient, here is the final output from using ChatGPT to create service description documentation:
  • Final doc in Markdown format – (GitHub Gist)
  • Final doc in PDF format – (Box)

Overview

I’ve been working on building out an internal offensive security function and got to the point where I need some internal documentation as to the service(s) description, engagement model, outcomes, etc. Like a lot of planning, I started with an Xmind #MindMap, and with all the buzz around #ChatGPT, I wanted to see how well it could take what I have and build some docs for me.

In addition to Xmind, I use Obsidian for note-taking, jotting down thoughts, and organizing documentation. The object of this exercise will be to have #Markdown formatted text that I can make final edits in Obsidian and from there publish to a documentation repository.

ChatGPT Prompts

First prompt

I started with a pretty robust prompt. With the exception of the Objective paragraph at the top, this was 100% copy-paste from Xmind to the ChatGPT prompt.

Objective: I want to create a service description and engagement model for an internal red team. This service description should be formatted like a document with section headings and subheadings. Format the output in Markdown. The service components will be as follows: - the team will manage the scheduling process - the team will determine the activities that must be completed to adequately test the target - the team will determine whether a specific test should be carried out by an internal team or if an external testing firm needs to be engaged Pre-requisites for initiating a test include: - a system architecture diagram - a completed threat model document - access to the systems that will be tested The activities performed by this function will include the following items Document processes so they are Repeatable Defendable Create and perform Reporting with the following characteristics templates regular cadence tied-in to the risk review process findings go into backlogs, jiras, and project plans, depending upon the team and target demonstrated ability to remediate findings 

Engagement with developers must include 
  Regular sessions with R&D and IT teams that are developing tools and systems  
  Demonstrations 
  Threat modeling 
  
Engagement with System Operators must include 
  Educational review sessions with teams 
    system operators 
    Blue team 
    system integrators Demonstrations of exploit Collaborative remediation sessions Threat modeling

And the output: ChatGPT output 1

Second prompt – measurable outcomes

Now I need to define some measurable objectives for the program in #OKR format so we can measure its success. The results were pretty meh. I’ve tried OKRs and #infosec success criteria in other chats and they pretty much all come out the same. This section had to be heavily edited for the final version. Luckily, I’d already outlined the objectives in the mindmap.

ChatGPT OKR prompt and response

Prompts 3 thru 5 – flowchart

Now that I have a working service description and some needs-editing OKR (all a bunch of words no one will read), I need a flowchart diagram. I’m a fan of mermaid diagrams for their usefulness across various markup languages like Markdown and RestructuredText.

prompt

based upon the service description above, create a flowchart of the entire process beginning with scheduling and ending with concluding a set of tests. Output this flowchart in Mermaid syntax.

output

graph TD
A(Schedule Test)
B(Review Prerequisites)
C(Plan Activities)
D(Determine Testing Resources)
E(Perform Testing)
F(Record Findings)
G(Produce Report)
H(Engage with Developers and System Operators)
I(Conclude Tests)

A --> B
B --> C
C --> D
D --> E
E --> F
F --> G
G --> H
H --> E
H --> I


In this flowchart, the process begins with scheduling a test (Step A), followed by reviewing the prerequisites for the test (Step B). The team then plans the activities required for the test (Step C) and determines whether internal or external testing resources are required (Step D). The testing is then performed (Step E) and the findings are recorded (Step F). The team then produces a report (Step G) and engages with the developers and system operators (Step H) to remediate any issues. The process concludes with the finalization of the tests (Step I).

This was a decent start, and I had a couple of iterations:

  1. adding a decision to the chart after step E in order to determine whether the completed tests are adequate to move on to testing completion
  2. inserting some more steps: “In the Mermaid flowchart above, tests need to be concluded prior to producing a report. Also, add a step called "Send to Product Management and Security Leadership" occurring in parallel with the "Engage with Developers and System Operators". Also, conclude with a step called "apply recommendations to target systems"”

There were still some errors at this point, such as ChatGPT insisting on putting the “Conclude Tests” step at or very near the end, even though it’s really about the mid-point of the workflow. I could have continued the prompts, but at this point I decided to finish this off by hand.

Prompts 6 and 7 – RACI

Now that the processes are defined, we need to identify the responsibilities of each of the roles and how they interact.

ChatGPT prompt requesting a RACI for an offensive security program

From this, it’s clear that ChatGPT is good at formatting and generating a RACI matrix in #Markdown format, and this version has reasonable values for the roles, even if I don’t agree with them. Still, it was useful enough to have a good foundation that I could tweak, but before that, I added one last parameter:

add a new role to the matrix called Red Team Manager. This role should be responsible for the scheduling and accountable for everything else. Add two new processes called Review Findings and Remediate Findings. The Developer is responsible for the former and the System Owner is responsible for the latter. Recreate the RACI matrix with these new parameters and output the Markdown code.

And this changed the RACI to basically make the manager accountable for everything.

Finishing up

At this point, I felt like I had the elements I needed, so I began the process of copy-pasting them from the interface into Obsidian and making tweaks to get a usable service description document.

The final output from using ChatGPT to create service description documentation:
  • Final doc in Markdown format – (GitHub Gist)
  • Final doc in PDF format – (Box)

Migrating PasswordSafe to KeePassXC

I’ve been a longtime user of #PasswordSafe (or, “PWsafe”), back since Bruce Schneier was managing authorship and maintenance. With all the issues experienced by online providers like LastPass and 1Password (but especially LastPass, by miles), I think the usage of a local password database with sync to a personal #NextCloud instance is the way to go. I’m happy with PWsafe; it’s worked well over the years, but I need to share a few passwords and would like some expanded functionality such as managing SSH keys, so I looked to #KeePassXC, which appears to be the most up-to-date and maintained branch of the KeePass and KeePassX family. KeePassXC is desirable because it is natively multi-platform, whereas the original KeePass is written for Windows, and emulators are required to use it on operating systems like Linux.

Importing passwords

There is no direct import from a PasswordSafe format to KeePass database format using KeePassXC like there is from LastPass to KeePass. A tab-delimited file can be exported from PWsafe, and KeePassXC can import a comma-delimited (“CSV”) file, however, I make heavy use of nested groups, and the work to prepare the CSV file looked like a major pain. Luckily, the original version of KeePass supports direct import from PWsafe.

Armed with that knowledge, this was my path to import my passwords:
  1. Open PasswordSafe and export the database in the XML format (be careful with this file and delete it when done!)
  2. Download the latest KeePass 2.x from https://keepass.info/
  3. Open KeePass, create a new KeePass version 2 database, and import the XML file
  4. Export the file as KeePass version 1.x database format
  5. Close KeePass 2.x
  6. Open KeePassXC and create a new database in a temporary location (it doesn’t matter, we won’t use it)
  7. Import the KeePass 1.x database with the passwords
  8. When prompted, choose the location and name where you want the database
  9. Done!

KeePass import dialogue box

Finishing Up

Make sure to explore the settings, such as adding a Yubikey and/or keyfile. When everything is as you want it and working, delete the interim files (the XML export and the KeePass 1.x and 2.x databases), and make a plan to retire the old PasswordSafe data.
