jpnix: (Default)
My best friend is an engineer at the National Renewable Energy Laboratory and is always trying to get me to go work there, since I have an affinity for high performance computers (supercomputers). So, for the fun of it, I wanted to see where the HPCs his organization works on stand on a global scale of power, which naturally led me to this website.

The problem with this website is that you cannot search the listing for specific rankings. You can download an XML file to search through... but that requires you to create an account just to download it... nah, too hard.

website showing the download links that require login

So let's extract the data ourselves.

First thing I noticed is that to navigate through the various pages (100 computers per page) the site uses a URL query parameter: https://www.top500.org/lists/top500/list/2022/11/?page=2. This makes it pretty easy to loop through the pages and download them one by one to store the data.

for i in {1..5}
do
    wget "https://www.top500.org/lists/top500/list/2022/11/?page=$i"
done

This downloads the full HTML pages containing the data we want.
Terminal showing wget output

You'll get saved HTML files like these:
index.html?page=1
index.html?page=2
index.html?page=3
index.html?page=4
index.html?page=5

Instead of leaving the data split across multiple files, it's better to concatenate them into one:

cat index* > top500.txt


Now that we have the data we can parse through it. You can't really do this easily in Bash, as it gets messy fast, but there is a nice Python library called Beautiful Soup that we can use to strip out the HTML tags and other nonsense we don't want.

sudo apt-get install python3-bs4

The source code is super simple: it takes in the file we combined above, parses out all the HTML tags, and writes the plain text to another file called top500_parsed.txt.
#!/usr/bin/env python3

from bs4 import BeautifulSoup

# Parse the combined HTML; naming the parser avoids a bs4 warning
with open("top500.txt") as markup:
    soup = BeautifulSoup(markup.read(), "html.parser")

# Write out just the text, with all tags stripped
with open("top500_parsed.txt", "w") as f:
    f.write(soup.get_text())


There we have it. top500_parsed.txt contains the information I wanted and I didn't have to sign up for an account to get it.
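
From here, finding a specific system is just a grep away; a hypothetical search with a couple lines of context might look like:

grep -i -B 2 -A 5 "nrel" top500_parsed.txt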
jpnix: (Default)
You guys have probably noticed this warning at the top of the RDS page in the AWS Console.
aws console RDS CA cert warning

While it is fairly trivial to update the CA certificate on each instance via the console, that's really not ideal if you have a large number of RDS instances running. With some shell parsing and the AWS CLI you can update all of them in a matter of minutes.

Requirements.

  • Have the AWS CLI installed. If you don't have it, see the Amazon documentation here to install it

  • An AWS admin account (or appropriate permissions to modify RDS instances)

  • Corresponding AWS profile set up with your account keys


Code to run
for i in $(aws rds describe-db-instances --profile=dev --region=us-east-1 \
           --query 'DBInstances[].DBInstanceIdentifier' --output text)
do
    aws rds modify-db-instance --db-instance-identifier "$i" \
        --ca-certificate-identifier rds-ca-2019 \
        --apply-immediately --profile=dev --region=us-east-1
done


NOTE: This will trigger a reboot because it uses the --apply-immediately flag. If you cannot have downtime in your environment, drop that flag so the change is applied during the next defined maintenance window instead.
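
To verify the change took (or to see which instances still need it), a read-only query works; this assumes the same dev profile and us-east-1 region used above:

aws rds describe-db-instances --profile=dev --region=us-east-1 \
    --query 'DBInstances[].[DBInstanceIdentifier,CACertificateIdentifier]' --output table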
jpnix: (Default)


When AWS Lambda doesn't provide the computing power you need to run an application that finishes in a few hours, and you don't want to pay for an EC2 instance sitting idle for the other 20 hours, what can you do to satisfy both needs?

Things you will need.

Infrastructure
  • A small management instance wrapped in an IAM role that allows EC2 instance launches from the AWS CLI.
  • Another instance to host the application you need to run.
Scripts
  • Cron job entry on management instance to power on the instance where the application lives
  • Script to evaluate uptime and execute an application on the instance
  • Cron job entry to check the uptime execute script on the app instance
Code
On the management server you would add the following command to a crontab to start the instance at the time you want:
30 09 * * * /usr/bin/aws ec2 start-instances --instance-ids i-instanceid --region us-east-1
On the application instance there is a script that checks the uptime of the server and launches any number of applications, with the idea that they only need to be run once per day at a specified uptime.
#!/bin/bash

TIME=$1
# Uptime in whole minutes
UPTIME=$(awk '{print int($1/60)}' /proc/uptime)

echo "$UPTIME"

if [ "$UPTIME" -eq "$TIME" ]; then
    echo "Launching script at $(date)"
    /home/user/documents/script.sh
    echo "Launching application at $(date)"
    /usr/bin/app --parameter --stuff
    # Exit zero so the cron entry below powers the instance off
    exit 0
else
    # Non-zero exit keeps the shutdown from firing before the target uptime
    exit 1
fi
While the instance is turned on, cron runs the script above once per minute to see if the uptime matches the value given as a parameter. If there is a match (3 minutes in this case) it executes the applications/scripts and exits zero, which lets the shutdown in the cron entry below power the instance off; on any other minute it exits non-zero so the instance stays up.
  • Note: If you run this script under any user other than root you should add a sudoers entry (via visudo) so the shutdown command can run without a password, as shown below.
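The sudoers entry would look something like this (assuming the account is literally named user):

user ALL=(ALL) NOPASSWD: /sbin/shutdown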
The cron entry that controls this script on the application server looks like the following (note the user field, so this belongs in /etc/cron.d or /etc/crontab rather than a user crontab):
* * * * * user /path/to/schedule_script 3 >> /tmp/scheduler.log 2>&1 && sudo /sbin/shutdown -h now
And that's it. A simple way to schedule applications to run at specific times without wasting money by keeping infrastructure up any longer than it needs to be to run the job(s).
jpnix: (Default)


I've been a big fan of editing my shell to display custom ASCII art for as long as I can remember. There is something fun about being greeted by a familiar artistic image before diving into work.

nyan cat ascii shell

One thing I've really been interested in trying is creating animated ASCII art displayed in my shell, and so that's what I did.

First thing I did was find a gif that I liked and that was simple enough to use as animated ASCII art.
I ended up settling on ...an animated chicken.



Now that I had my gif I needed to extract each frame from it, which I did with the following tool:
https://www.gif-explode.com/
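
If you'd rather do that step from the terminal, ImageMagick can explode a gif into frames too; a sketch assuming ImageMagick is installed and the gif is saved as chicken.gif:

mkdir -p frames
convert -coalesce chicken.gif frames/frame_%02d.png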



I created a folder to store all the gif frames in for easy organization, then one by one converted them into ASCII art images using this site:

http://picascii.com/

Now that I had the ASCII art I needed, it was time to edit my ~/.bashrc.

I created a function at the end of the ~/.bashrc that loops through the ASCII art frames:

function animate_chicken {
    for i in {1..17}
    do
        # Print the frame in rainbow colors, hold it briefly, then clear
        lolcat "/home/james/chicken/$i.txt"
        sleep 0.1
        clear
    done
}


animate_chicken


After you save your ~/.bashrc you need to source it (source ~/.bashrc), and you will then have an animated ASCII greeting when you open your terminal. There are many other things you can do within the loop to improve the animation, like giving certain frames more or less sleep time for speed variation.
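For example, a per-frame delay table; the values here are made up, holding the first and last frames a little longer:

function animate_chicken {
    # Hypothetical delays, one per frame (17 entries)
    local delays=(0.4 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.1 0.4)
    for i in {1..17}
    do
        lolcat "/home/james/chicken/$i.txt"
        sleep "${delays[i-1]}"
        clear
    done
}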


Final product
jpnix: (Default)
We do quite a lot of things through proxies at my place of employment, mainly so we don't get blacklisted, since the behavior of our malware/phish processing systems oftentimes looks like we're doing some pretty shady stuff.

I had a use case where one of our Nagios checks was designed to hit a vendor API endpoint through our local on-box proxy via specific ports and determine whether the endpoint was reachable. This caused some issues: sometimes the endpoint would be down for a very small window of time, or the proxy was being derpy for some reason; either way, I was getting up in the middle of the night for false positives. That makes for an angry systems engineer, so I thought I'd just rewrite the check.

It may be useful for anyone who needs to check the state of something multiple times before sending off the Nagios exit response. Instead of checking for a failure once and alerting, it checks for 3 consecutive failures.



The "Nagios Plugin" script
#!/bin/bash

# Variable initialization
http_response=$(curl -s -o /dev/null -w "%{http_code}" --proxy "localhost:$1" 'http://API.endpoint.com')
frequency=0

# If the HTTP response isn't 200, re-check up to 3 consecutive times,
# incrementing a failure counter each time
while [ "$http_response" != 200 ]
do
    echo "$http_response"
    http_response=$(curl -s -o /dev/null -w "%{http_code}" --proxy "localhost:$1" 'http://API.endpoint.com')
    ((frequency++))
    if [ "$frequency" -eq 3 ]; then
        break
    fi
    sleep 60
done

# Compare the counter: if it is under 3 the check corrected itself and a
# false positive was avoided; otherwise it's probably a real alert
if [ "$frequency" -eq 3 ]; then
    echo "Port $1 not reachable - $http_response response"
    exit 2
else
    echo "Port $1 reachable"
    exit 0
fi
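
Before wiring it into NRPE you can test the plugin by hand (assuming it is saved as check_endpoint_proxy and the local proxy listens on port 27845):

./check_endpoint_proxy 27845; echo "exit code: $?"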



NRPE definition
command[check_endpoint_through_proxy]=/usr/lib64/nagios/plugins/check_endpoint_proxy "27845"

Nagios definition

define service{
    use                 remote-service,srv-pnp
    host_name           server.nrpe.response
    service_description Endpoint Local Proxy Connection
    contact_groups      emailadmins
    max_check_attempts  3
    check_command       check_nrpe!check_endpoint_through_proxy
}
jpnix: (Default)
There are a few moving parts to getting ClamAV installed and set up correctly to scan weekly, so I threw together a useful script that does it all automatically on any Red Hat-based machine.
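
The script itself isn't reproduced here, but a minimal sketch of those moving parts, assuming a RHEL/CentOS box with EPEL available and a weekly scan scheduled from /etc/cron.d, looks roughly like this:

#!/bin/bash
# Run as root. Install ClamAV plus the signature updater (EPEL packages)
yum install -y epel-release clamav clamav-update

# Pull down the initial virus signature databases
freshclam

# Weekly scan: Sundays at 02:00, recursive, report infected files only
echo '0 2 * * 0 root /usr/bin/clamscan -ri /home >> /var/log/clamscan-weekly.log 2>&1' > /etc/cron.d/clamav-weekly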

jpnix: (Default)
The bulk of my day job is actually analyzing phish, phishkits, drop scripts, etc. Lately we have run into an issue where the phishing campaign is only accepting local IPs to view the phishing content and blocking out everything else in the .htaccess.

For this reason I wrote a little utility that lets us check whether we get any kind of response from the phish based on the geographic location of a proxy connection.







#!/bin/bash

echo " "
echo "------------------------------"
echo " GeoBlocked? "
echo "------------------------------"
echo " "
echo "Enter proxy list file name, if not in same directory provide full path: "
read LIST
echo "Enter URL to see if it's being geoblocked: "
read URL
echo " "
echo "Checking status of: $URL This could take some time"
echo " "
echo " "

PROXY="$(< "$LIST")"
red=$(tput setaf 1)
green=$(tput setaf 2)
reset=$(tput sgr0)

function url_check()
{
    # Route both curls below through the current proxy
    export http_proxy="http://$1"

    status="$(curl --max-time 15 --connect-timeout 15 -s -o /dev/null -I -w '%{http_code}' "$URL")"
    # Ask a geolocation page which country the proxy appears to be in
    country="$(curl -s http://whatismycountry.com/ | sed -n 's|.*,\(.*\)|\1|p')"
    DOWN="${red}$1 - URL IS DOWN - $country${reset}"
    UP="${green}$1 - URL IS UP - $country${reset}"
    TIMEOUT="${red}$1 - Proxy connection took too long${reset}"

    case "$status" in
        "200"|"201"|"202"|"203"|"204") echo "$UP";;
        "400"|"401"|"402"|"403"|"404"|"500"|"501"|"503") echo "$DOWN";;
        *) echo "$TIMEOUT";;
    esac
    unset http_proxy
}

for i in $PROXY; do
    url_check "$i"
done
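
The proxy list file is just one host:port per line; a made-up example using documentation address space:

203.0.113.10:8080
198.51.100.22:3128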



jpnix: (Default)



After moving to a new city a little less than a decade ago and finding out the neighborhood my lady and I moved to was rife with drugs, prostitution, and theft, I let the paranoia set in, despite the fact that we have had no problems and the neighbors have been exceptionally nice.

No matter, I'm an engineer, I solve problems - right?

So with a spare hour I set off to build a solution to help ease my paranoia of the house getting broken into while we are gone.

This is the result: a super simple webcam recording security solution for Linux-based operating systems.

Things you will need.



  1. A webcam

  2. Linux OS (I used xubuntu 14.04)

  3. streamer software

  4. Dropbox



After gathering the materials, the first thing you need to do is install streamer, the software we will use to record our surroundings.

sudo apt-get install streamer

After that, create a Dropbox account if you do not already have one and install it on your system. The default install directory is your home directory.

You can simply install the deb or follow the command line guide here for installation.
Dropbox install for linux

I chose this step because I figured if someone breaks into your house they will probably take the computer that is doing the recording, and if the pictures are stored only locally the effort is meaningless.

Once you have everything installed, create a new directory under ~/Dropbox (or wherever you installed it). I named mine "security". This directory will sync up to your Dropbox for review anywhere.

Lastly, the brains of the operation: a super tiny, simple "script" that does the work. Copy the code and save it as a shell script; I named mine "monitor.sh".



#!/bin/bash

# Change to the Dropbox directory you created to store pics
cd ~/Dropbox/security || exit 1

# Continuous loop
while true
do
    # streamer takes a picture; a timestamped name keeps them chronological
    streamer -c /dev/video0 -o "$(date +%Y%m%d-%H%M%S).jpeg"
    # Wait 20 seconds before the next shot
    # (3 pics per minute - change to whatever interval you want)
    sleep 20
done


There you have it. Now you just call the script and it will run.

./monitor.sh
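
If you kick it off over SSH and want it to keep running after you log out, launching it detached works too:

nohup ./monitor.sh >/dev/null 2>&1 &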

Now anything that shows up in that directory will be synced to your Dropbox, and while you may still get robbed, at least you will have something to show the police and hopefully get some form of justice.




Thoughts for improvement.


  • Get off of Dropbox - upload to a remote server instead (see the sketch below)

  • Set this up on multiple Raspberry Pis with wifi to monitor outside entry points
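
For the first idea, a minimal sketch using rsync, assuming a hypothetical backup host and passwordless SSH keys already set up:

# Push captures to a remote box and delete the local copy once transferred
rsync -az --remove-source-files ~/Dropbox/security/ user@backup.example.com:/srv/security/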
