jpnix: (Default)
My best friend is an engineer at the National Renewable Energy Laboratory and is always trying to get me to work there, since I have an affinity for high performance computers (supercomputers). For fun, I wanted to see where the HPCs his organization works on stand on a global scale of power, which naturally led me to this website.

The problem with this website is that you cannot search the listing for specific rankings. You can download an XML file to search through... but that requires you to create an account just to download it... nah, too hard.

website showing the download links that require login

So let's extract the data ourselves.

The first thing I noticed is that navigating through the various pages (100 computers per page) is done with a URL query parameter: https://www.top500.org/lists/top500/list/2022/11/?page=2. This makes it pretty easy to loop through the pages and download them one by one to store the data.

for i in {1..5}
do
    wget "https://www.top500.org/lists/top500/list/2022/11/?page=$i"
done

This downloads the entire HTML pages with the data we want.
Terminal showing wget output

You'll get saved HTML files like so:
index.html?page=1
index.html?page=2
index.html?page=3
index.html?page=4
index.html?page=5

Instead of having the data split up across multiple files it's better to join them together.
 cat index* > top500.txt


Now that we have the data we can parse through it. You can't really do this easily with Bash, as it gets messy really fast, but there is a nice Python library called Beautiful Soup that we can use to strip out the HTML tags and nonsense we don't want.

sudo apt-get install python3-bs4

The source code is super simple: it takes the file we combined above, parses out all the HTML tags, and writes the parsed text to another file called top500_parsed.txt.
#!/usr/bin/env python3

from bs4 import BeautifulSoup

# read the combined pages and parse with the stdlib HTML parser
with open("top500.txt") as markup:
    soup = BeautifulSoup(markup.read(), "html.parser")

# write out just the visible text with every tag stripped
with open("top500_parsed.txt", "w") as f:
    f.write(soup.get_text())


There we have it. top500_parsed.txt contains the information I wanted and I didn't have to sign up for an account to get it.
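From here plain old grep can do the searching the site wouldn't let me do; a quick sketch, with the search term and context sizes just examples:

grep -i -B 2 -A 4 "NREL" top500_parsed.txt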
jpnix: (Default)
Update Jan 17:

tl;dr with an Intel CPU:
 fwupdmgr reinstall $(fwupdmgr get-devices | grep "Management Engine" -A 10 | grep "Device ID:" | awk '{print $4}')


While the workaround below is cool, it's just a workaround. Since I knew the issue was firmware related I poked around more with the error and came to the conclusion that all you need to do is reinstall the firmware, since that is what got borked.

Use the Linux firmware update manager, fwupdmgr, to find (in my case) the Intel Management Engine device ID.
 fwupdmgr get-devices | grep "Management Engine" -A 10 | grep "Device ID:" 

Take the device ID returned and simply reinstall its firmware
fwupdmgr reinstall (device ID from above command)

Reboot your machine and wait for the update... be patient, it takes a few minutes.

Sound works correctly again with zero workarounds.


###########################################################################


Yesterday, Jan 13, 2023, I updated the firmware on my Pop!_OS machine as usual... except this update was anything but normal. Three hours post-update and reboot, my machine just sat idle showing nothing but a slightly illuminated black screen. I decided to power down the machine manually by pressing the power button. After performing the hard reboot the computer simply did not respond at all to the physical power button, so I thought I had bricked it. Alas, I took apart the laptop chassis, unseated the battery and unplugged the CMOS battery for a few minutes, and my machine was at least able to be powered on via the button again.

Picture of chassis-less Lenovo laptop

I thought all was well and good until I realized, 15 minutes before a Zoom interview, that my sound did not work. My sound card wasn't being recognized at all; only "dummy output" showed when I checked the Pop!_OS sound settings. After checking some logs and dmesg output I found this bit of information, the smoking gun.

$ aplay -l
aplay: device_list:274: no soundcards found...


[   12.661441] sof-audio-pci-intel-cnl 0000:00:1f.3: cl_dsp_init: timeout with rom_status_reg (0x80000) read 
[   12.661445] sof-audio-pci-intel-cnl 0000:00:1f.3: ------------[ DSP dump start ]------------ 
[   12.661447] sof-audio-pci-intel-cnl 0000:00:1f.3: Boot iteration failed: 3/3 
[   12.661447] sof-audio-pci-intel-cnl 0000:00:1f.3: fw_state: SOF_FW_BOOT_IN_PROGRESS (2) 
[   12.661455] sof-audio-pci-intel-cnl 0000:00:1f.3: 0x06000021: module: ROM, state: CSE_IPC_RESET_PHASE_1, waiting for: CSE_CSR, running 
[   12.661465] sof-audio-pci-intel-cnl 0000:00:1f.3: extended rom status:  0x6000021 0x0 0x0 0x0 0x0 0x0 0x1811102 0x0 
[   12.661466] sof-audio-pci-intel-cnl 0000:00:1f.3: ------------[ DSP dump end ]------------ 
[   12.661936] sof-audio-pci-intel-cnl 0000:00:1f.3: error: dsp init failed after 3 attempts with err: -110 
[   12.661946] sof-audio-pci-intel-cnl 0000:00:1f.3: Failed to start DSP 
[   12.661947] sof-audio-pci-intel-cnl 0000:00:1f.3: error: failed to boot DSP firmware -110 
[   12.712561] sof-audio-pci-intel-cnl 0000:00:1f.3: error: hda_dsp_core_reset_enter: timeout on HDA_DSP_REG_ADSPCS read 
[   12.712564] sof-audio-pci-intel-cnl 0000:00:1f.3: error: dsp core reset failed: core_mask f 
[   12.712720] sof-audio-pci-intel-cnl 0000:00:1f.3: error: sof_probe_work failed err: -110

The Intel SOF audio firmware was failing to boot entirely, and instead of fighting with that particular firmware I decided to just force the kernel to use a different driver. Here is how.

Install ALSA
 sudo apt-get install alsa 


Create a file ending in .conf in your kernel module configuration directory, /etc/modprobe.d. Files in this directory pass module settings to udev, which uses modprobe to manage the loading of modules during system boot.
/etc/modprobe.d/something.conf

The file you created should contain the following kernel module option, which loads a completely different Intel audio driver on boot.
 options snd-intel-dspcfg dsp_driver=1 
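If you'd rather not open an editor, a one-liner does it too; the file name here is just an example, anything ending in .conf inside /etc/modprobe.d works:

echo "options snd-intel-dspcfg dsp_driver=1" | sudo tee /etc/modprobe.d/snd-fallback.conf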

After you've saved your conf file you need to run the dracut command. dracut creates an initial image used by the kernel for preloading the block device modules (such as IDE, SCSI or RAID, or in this case sound drivers) which are needed to access the root filesystem, mount it and boot into the real system.

At boot time, the kernel unpacks that archive into a RAM disk, mounts it and uses it as the initial root file system. All finding of the root device happens in this early userspace.
sudo dracut --force

Now simply reboot your machine, and when you check your Pop!_OS sound settings you'll notice "dummy output" has been changed to this.
Screenshot of new sound output "speakers built-in audio"

Congrats you have sound again.

https://www.youtube.com/watch?v=SkH3MTG2aAE
jpnix: (Default)
OSSEC HIDS is the world's most widely used host intrusion detection system. It works great, has a lot of features and is fully open source. It does not, however, play nice with Amazon Linux 1. After a lot of trial and error I developed a script that can be used to install OSSEC HIDS on both Amazon Linux 1 and 2. You can also place this script on your servers and run it via Ansible to automate the installation process across thousands of instances (see the example after the script).

#!/bin/bash
#jmp@linuxmail.org
#automatically install ossec hids server

EMAIL=$1

cd;
wget https://github.com/ossec/ossec-hids/archive/3.6.0.tar.gz
tar -xzvf 3.6.0.tar.gz
yum -y install gcc libevent-devel.x86_64 zlib-devel.x86_64 openssl-devel.x86_64 postfix mailx


cd ossec-hids-*
wget https://ftp.pcre.org/pub/pcre/pcre2-10.32.tar.gz
tar xzf pcre2-10.32.tar.gz -C src/external

#double quotes so $EMAIL actually expands into the scripted installer answers
printf "\n\nserver\n\n\n$EMAIL\nn\nlocalhost\ny\ny\nn\n\n" | PCRE2_SYSTEM=no ./install.sh

/var/ossec/bin/ossec-control start




Save the script and run it as
./auto_ossec.sh email_address
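To push it out with Ansible, the stock script module copies the script to each host and runs it there. A minimal sketch, assuming an inventory file named hosts and the script in your working directory:

ansible all -i hosts -m script -a "./auto_ossec.sh admin@example.com" --become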
jpnix: (Default)
You've probably noticed this warning at the top of the RDS page in the AWS Console.
aws console RDS CA cert warning

While it is fairly trivial to update all the CA certificates via the console, it is really not ideal if you have a large number of RDS instances running. With some shell parsing and the AWS CLI you can update all of them in a matter of minutes.

Requirements.

  • Have the AWS CLI installed. If you don't have it, see Amazon's documentation here to install it

  • An AWS admin account (or appropriate permissions to modify RDS instances)

  • Corresponding AWS profile set up with your account keys


Code to run
for i in $(aws rds describe-db-instances --profile=dev --region=us-east-1 | grep -i DBInstanceIdentifier | awk '{print $2}' | tr -cd "'[:alnum:]\-_+ \n");
do
    aws rds modify-db-instance --db-instance-identifier $i --ca-certificate-identifier rds-ca-2019 --apply-immediately --profile=dev --region=us-east-1;
done
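If you'd rather skip the grep/awk/tr dance, the CLI can emit the identifiers directly with a JMESPath query; a tidier sketch of the same loop:

for i in $(aws rds describe-db-instances --query 'DBInstances[].DBInstanceIdentifier' --output text --profile=dev --region=us-east-1);
do
    aws rds modify-db-instance --db-instance-identifier $i --ca-certificate-identifier rds-ca-2019 --apply-immediately --profile=dev --region=us-east-1;
done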


NOTE: This will cause a reboot because it uses the --apply-immediately flag. If you cannot have downtime in your environment, pass --no-apply-immediately instead so the change is applied during the next defined maintenance window.
jpnix: (Default)
Just a couple of days ago Amazon finally released a Linux client for their virtual desktop offering, Workspaces. The only problem is that it is currently only supported on Debian based distros, leaving out a ton of users and businesses that rely on RPM based packaging and distributions.

This guide is to help remedy that and allow you to install Amazon Workspaces on your RPM based distro, in this case CentOS 7.


Requirements and dependencies



sudo yum -y install http://repo.okay.com.mx/centos/7/x86_64/release/okay-release-1-1.noarch.rpm
sudo yum -y install alien rpm-build openssl11-libs


In addition to that we will need to install an updated version of gcc, as the one on stock CentOS 7 does not provide the required GLIBCXX versions in libstdc++.so.6. You can compile this yourself, which takes a good amount of time, or install the packaged RPM version from here

sudo yum install ./gcc73-c++-7.3.0-1.el7.x86_64.rpm


Finally we will need the Amazon provided Workspaces package, which you can download from here

Converting from DEB to RPM



There is a really great tool called Alien that converts packages from deb to rpm and rpm to deb. This is what we will be using to do the heavy lifting.

sudo alien -r -g -v workspacesclient_amd64.deb


These flags (-r to target RPM, -g to generate the build tree without building the package, -v for verbose output) create the RPM build folder from the deb package but do not actually build the full RPM package yet. We need to edit a few lines within the spec file before building, otherwise we will have issues with path definitions already owned by the filesystem package.


sudo sed -i 's#%dir "/"##' workspacesclient-3.0.1.234-2.spec
sudo sed -i 's#%dir "/usr/bin/"##' workspacesclient-3.0.1.234-2.spec


Now that the spec file for the RPM package is fixed we need to finish the build process and make it a real package. The --buildroot argument needs to be the full path of the generated folder containing the spec file.


sudo rpmbuild --buildroot='/home/jp/workspacesclient-3.0.1.234' -bb --target x86_64 'workspacesclient-3.0.1.234-2.spec' 2>&1



Install the RPM


Now you should see a shiny new workspacesclient-3.0.1.234-2.x86_64.rpm package waiting to be installed.

sudo rpm -ivh workspacesclient-3.0.1.234-2.x86_64.rpm

The newly created package should install with no issue and you'll be able to launch the workspaces application.
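As a quick sanity check that the converted package's shared libraries all resolve, ldd helps. I'm assuming here, hypothetically, that the package drops a binary named workspacesclient on your PATH; the actual name may differ:

ldd "$(command -v workspacesclient)" | grep "not found" || echo "all shared libraries resolved"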

Automate the install


If you're not worried about packaging up your own RPM from the converted deb, you can use this script to do everything for you. Just make it executable and run it as root.

#!/bin/bash
#jmp@linuxmail.org
#install script for amazon workspaces on RPM based machines.

setup()
{
    mkdir workspaceclient;
    cd workspaceclient || exit;
}

cleanup()
{
    rm -rf *.rpm
    cd ../; rmdir workspaceclient
}

download_files()
{
    wget https://amazonworkspace-centos7-files-dd8463a2e6024792.s3.amazonaws.com/gcc73-c__-7.3.0-1.el7.x86_64.rpm;
    wget https://amazonworkspace-centos7-files-dd8463a2e6024792.s3.amazonaws.com/workspacesclient-3.0.1.234-2.x86_64.rpm
}

install()
{
    yum -y install http://repo.okay.com.mx/centos/7/x86_64/release/okay-release-1-1.noarch.rpm
    yum -y install openssl11-libs
    #install the downloaded gcc and workspaces client packages
    for i in *.rpm;
    do
        echo $i;
        yum -y install ./$i
    done
}

main()
{
   setup
   download_files
   install
   cleanup
}

main


Amazon Workspaces running on CentOS 7
jpnix: (Default)



If you've ever wanted to change your default splash screen on Linux, here is how to do it.

The splash screen files are here.
/usr/share/plymouth/themes/

Note that the software that runs the splash screen is Plymouth.
wiki.debian.org/plymouth

Once you're in the directory, find your default theme. For Debian it's debian-theme.

The easiest way to get running quickly is to just overwrite the main PNG files in this directory. Take an image, edit it, save it as a PNG and move it to this directory with the same name as the file it replaces. You may want to back up this directory first in case you want to go back.
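A minimal sketch of the backup-and-overwrite step; the theme directory matches Debian's default, but the image name inside it is just an example, use whatever files your theme actually contains:

sudo cp -r /usr/share/plymouth/themes/debian-theme /usr/share/plymouth/themes/debian-theme.bak
sudo cp my-splash.png /usr/share/plymouth/themes/debian-theme/background.png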

Once the files are in place, run the following command as root


update-initramfs -u

In addition, to change the GRUB background image simply run
update-grub

It should print the locations of the background images it found.

Found background image: /usr/share/images/desktop-base/desktop-grub.png
Found background image: /opt/artwork/debian-grub.png


Overwrite those files using the same method.
jpnix: (Default)
Polybar

Downloading and installing polybar

wget http://download.opensuse.org/repositories/home:/sysek/openSUSE_Leap_15.0/x86_64/polybar-3.3.0-lp150.7.1.x86_64.rpm

use zypper to install the package and its dependencies

sudo zypper in polybar-3.3.0-lp150.7.1.x86_64.rpm

Polybar Configuration

make a polybar directory in ~/.config

copy the example polybar config into that directory

cp /usr/share/doc/polybar/config ~/.config/polybar/config

run the following command to see it load

polybar example

*note: the font glyphs don't work out of the box, so you will need to install the Siji fonts
 
Installing glyph font

git clone https://github.com/stark/siji.git

cd siji; chmod +x install.sh; ./install.sh

get xfd and make sure it's installed

software.opensuse.org/ymp/home:pontostroy:X11/openSUSE_Leap_42.2/xfd.ymp
 
See where exactly the font is on your system, because the name will most likely be different than what's in your configuration file.

fc-list | grep siji

Mine returned the following: Wuncon Siji:style=Regular

You will then need to edit the config file that you copied over, ~/.config/polybar/config.

change the font type directive from
font-2 = siji:pixelsize=10;1
to
font-2 = Wuncon Siji:style=Regular:pixelsize=10;1
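If you'd rather script the edit, a sed one-liner does it; swap in whatever name your own fc-list output returned:

sed -i 's|font-2 = siji:pixelsize=10;1|font-2 = Wuncon Siji:style=Regular:pixelsize=10;1|' ~/.config/polybar/config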


run polybar example again and now you will see the glyphs displayed.

FIN.

 

jpnix: (Default)
A few months ago Linux Rig was asking for submissions from Linux professionals to feature on their site. Here is my quick interview, albeit I have since moved on to a much different distro/configuration.

https://linuxrig.com/2019/02/27/the-linux-setup-james-permenter-systems-engineer/
jpnix: (Default)


When AWS Lambda doesn't provide the computing power you need to run an application that finishes in a few hours, and you don't want to pay for an EC2 instance sitting up for 20 additional hours doing nothing, what can you do to satisfy both needs?

Things you will need.

Infrastructure
  • A small management instance wrapped in an IAM role to allow EC2 instance launches from AWSCLI.
  • Another instance to host the application you need to run.
Scripts
  • Cron job entry on the management instance to power on the instance where the application lives
  • Script on the app instance that evaluates uptime and executes the application
  • Cron job entry on the app instance to run the uptime-check script
Code
On the management server you would add the following command to a crontab to start the instance when you want
30 09 * * * /usr/bin/aws ec2 start-instances --instance-ids i-instanceid --region us-east-1
On the application instance there is a script that checks the uptime of the server and launches any number of applications that need to be run, with the idea that they only need to run once per day at a specified uptime.
#!/bin/bash

#target uptime in minutes, passed as the first argument
TIME=$1
#current uptime in whole minutes: /proc/uptime reports seconds
UPTIME=$(awk '{printf "%d", $1/60}' /proc/uptime)

echo $UPTIME

if [ $UPTIME -eq $TIME ]; then
	echo "Launching script at $(date)"
	/home/user/documents/script.sh
	echo "Launching application at $(date)"
	/usr/bin/app --parameter --stuff
else
	exit
fi
While the instance is turned on, cron checks once per minute whether the server's uptime matches the value given as a parameter to the script above. If there is a match, in this case 3 minutes, it executes the applications/scripts and powers the instance off.
  • Note: If you run this script under any user other than root you should add a visudo entry to execute the shutdown command with no password.
The cron entry to control this script on the application server (note the user field, so this belongs in /etc/crontab or /etc/cron.d) looks like the following
* * * * * user /path/to/schedule_script 3 &>> /tmp/scheduler.log && sudo /sbin/shutdown -h now
And that's it. A simple way to schedule applications to run at specific times without wasting resources by keeping infrastructure up any longer than it needs to be to run the job(s).
jpnix: (Default)
We do quite a lot of things through proxies at my place of employment, mainly so we don't get blacklisted, as the behavior of our malware/phish processing systems often looks like we're doing some pretty shady stuff.

I had a use case where one of our Nagios checks was designed to hit a vendor API endpoint through our local on-box proxy via specific ports and determine if the endpoint was reachable. This caused some issues, as sometimes the endpoint would be down for a very small window of time, or the proxy was being a derp for some reason; either way, I was getting up in the middle of the night for false positives. This makes for an angry systems engineer, so I thought I'd just rewrite the check.

It may be useful for anyone that needs to check the state of something multiple times before sending off the Nagios exit response. Instead of checking for a failure once and alerting, it checks for 3 consecutive failures.



The "Nagios Plugin" script
#!/bin/bash

#Variable initialization
http_response=$(curl -s -o /dev/null -w "%{http_code}" --proxy localhost:$1 'http://API.endpoint.com')
frequency=0

#if the http response isn't 200 it will check 3 consecutive times for a change. If no change occurs it will increment a flag for each failure.
while [ "$http_response" != 200 ]
do
  echo "$http_response"
  http_response=$(curl -s -o /dev/null -w "%{http_code}" --proxy localhost:$1 'http://API.endpoint.com')
  ((frequency++))
  if [ "$frequency" -eq 3 ]; then break
  fi
  sleep 60
done

#Compare the flag value. If it is less than 3 the check corrected itself and prevented a false positive, otherwise it's probably a real alert.
if [ "$frequency" -eq 3 ]; then
        echo  "Port $1 not reachable - $http_response response"
        exit 2
else
        echo "Port $1 reachable"
        exit 0
fi



NRPE definition
command[check_endpoint_through_proxy]=/usr/lib64/nagios/plugins/check_endpoint_proxy "27845"

Nagios definition

define service{
        use remote-service,srv-pnp
        host_name server.nrpe.response
        service_description Endpoint Local Proxy Connection
        contact_groups emailadmins
        max_check_attempts 3
        check_command check_nrpe!check_endpoint_through_proxy
        }
jpnix: (Default)
There are a few moving parts to getting ClamAV installed and set up correctly to scan weekly, so I threw together a useful script that does it all automatically on any Red Hat based machine.
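A minimal sketch of those moving parts, assuming EPEL on CentOS 7 (package names and config paths vary by release):

#!/bin/bash
#install ClamAV and the signature updater from EPEL
yum -y install epel-release
yum -y install clamav clamav-update

#stock freshclam.conf ships with a placeholder "Example" line that must be disabled
sed -i 's/^Example/#Example/' /etc/freshclam.conf

#pull the latest signature databases
freshclam

#weekly scan: Sundays at 02:00, recursive, report infected files only
echo '0 2 * * 0 root /usr/bin/clamscan -ri /home >> /var/log/clamscan-weekly.log 2>&1' > /etc/cron.d/clamav-weekly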

jpnix: (Default)
Nagios Hearts AWS



Good ol' Nagios. The monitoring system that just won't die, and for good reason: it does what we want and does it well.

I found there were not very many good resources on setting up Nagios alerts to publish to SNS topics for SMS subscriptions, so I did the leg work and am presenting it here.

Note: It's expected that you already have Nagios set up on a server and understand at least the basics of configuration definitions.

Setting up the SNS topic in AWS

The first step is to actually set up the SNS topic via the AWS dashboard.

create the sns topic


Make sure you get the Topic ARN after creating it; we will need it to create a service account and IAM policy that publish the Nagios alerts to the topic. It will look like arn:aws:sns:us-east-1:101010101010:nagios where 101010101010 is your account number.

While you're in the Topics dashboard, create subscriptions for the engineers that want to receive Nagios alerts by clicking the "create subscription" button.

create sms subscription to the sns topic

Go to https://console.aws.amazon.com/iam/home?region=us-east-1#/users and create the user that will have access to publish. I always use explicit naming conventions to easily manage service accounts via IAM; in this case it will be service.nagios. Save the access and secret key values that are created for the user.

Now create a policy named policy.nagios-sns with the following permissions.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sns:Publish"
            ],
            "Resource": [
                "arn:aws:sns:us-east-1:101010101010:nagios"
            ]
        }
    ]
}



Nagios Integration



The first thing you are going to want to do is add the aws_access_key_id and aws_secret_access_key to the server Nagios is on, under the nagios user (or whatever user the nagios service was set up to run under).

By default the user that runs the nagios service has its shell set to /bin/nologin, but you can get around that by issuing, as root: su - nagios -s /bin/bash

If you're already on an AWS instance you can just issue the command aws configure. If Nagios is not running in Amazon, install the AWS CLI by following the steps here: http://docs.aws.amazon.com/cli/latest/userguide/installing.html. Once you add the keys they are stored in ~nagios/.aws/credentials, and the policy you created for the service account tied to these keys will take effect.
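Put together, the credential setup looks like this; run the su as root, then feed aws configure the service.nagios keys and your region when prompted:

su - nagios -s /bin/bash
aws configure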

We are going to create a separate contact template and command inside our Nagios configuration just for alerting via SNS. Navigate to the objects directory inside your Nagios installation (my path is /usr/local/nagios/etc/objects) and edit the file templates.cfg. Search for generic-contact; we are pretty much copying everything from that default entry, changing names around and passing a different command. Below the original is our copied and edited entry, named sns-contact.


# Generic contact definition template - This is NOT a real contact, just a template!

define contact{
        name                            generic-contact         ; The name of this contact template
        service_notification_period     24x7                    ; service notifications can be sent anytime
        host_notification_period        24x7                    ; host notifications can be sent anytime
        service_notification_options    w,u,c,r,f,s             ; send notifications for all service states, flapping events, and scheduled downtime events
        host_notification_options       d,u,r,f,s               ; send notifications for all host states, flapping events, and scheduled downtime events
        service_notification_commands   notify-service-by-email ; send service notifications via email
        host_notification_commands      notify-host-by-email    ; send host notifications via email
        register                        0                       ; DONT REGISTER THIS DEFINITION - ITS NOT A REAL CONTACT, JUST A TEMPLATE!
        }


define contact{
        name                            sns-contact         ; The name of this contact template
        service_notification_period     24x7                    ; service notifications can be sent anytime
        host_notification_period        24x7                    ; host notifications can be sent anytime
        service_notification_options    w,u,c,r,f,s             ; send notifications for all service states, flapping events, and scheduled downtime events
        host_notification_options       d,u,r,f,s               ; send notifications for all host states, flapping events, and scheduled downtime events
        service_notification_commands   notify-service-by-sns ; send service notifications via SNS
        host_notification_commands      notify-host-by-sns    ; send host notifications via SNS
        register                        0                       ; DONT REGISTER THIS DEFINITION - ITS NOT A REAL CONTACT, JUST A TEMPLATE!
        }


The real magic behind the Nagios AWS SNS integration is within the command definitions themselves; they are how we publish alerts to the topic.

Edit the configuration file /usr/local/nagios/etc/objects/commands.cfg and add these under the sample notification commands.

# 'notify-host-by-sns' command definition
define command{
        command_name    notify-host-by-sns
        command_line    /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time: $LONGDATETIME$\n" | aws sns publish --topic-arn arn:aws:sns:us-east-1:101010101010:nagios --message "$NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$"
        }

# 'notify-service-by-sns' command definition
define command{
        command_name    notify-service-by-sns
        command_line    /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" |  aws sns publish --topic-arn arn:aws:sns:us-east-1:101010101010:nagios --message "$NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$"
        }
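Before wiring these into any contacts it's worth a quick test from the nagios user's shell that publishing actually works; swap in your real topic ARN:

aws sns publish --topic-arn arn:aws:sns:us-east-1:101010101010:nagios --message "nagios SNS test"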


We're almost done now. The last thing we have to do is define a contact group and a contact that inherits the sns-contact object definition.

Wherever you have your contacts defined, add the entry

#################################################################################
# SMS  Admin contact group                                                      #
#################################################################################

define contactgroup{
        contactgroup_name       smsadmins
        alias                   SMS Nagios Administrators
        members                 sns
        }



define contact{
        contact_name                    sns            ; Short name of user
        use                             sns-contact         ; Inherit default values from sns-contact template (defined above)
        alias                           SNS Alert          ; Full name of user
        service_notification_options    c,r
        }



From here on out, when you define a new server check that you want SMS notifications for, you just add the contact group we created to the check.

# Define a service to "ping" the local machine

define service{
        use                             remote-service,srv-pnp         ; Name of service template to use
        host_name                       serverName
        service_description             PING
        contact_groups                  smsadmins,emailadmins
	check_command			check_ping!250.0,20%!500.0,60%
        }


Finally, after all that work, you can now publish Nagios alerts to the SNS topic and receive them through the SMS subscriptions.



SNS topic to SMS subscription nagios alerts
jpnix: (Default)
As a Linux geek I thought it would be kind of fun to recreate some basic core tools I use every day, pretty much to see if it could be done and to compare my solution to what's actually implemented. I started off with something super simple, the Unix "cat" command, which as I'm sure you know just displays the contents of a file in your shell.
It seemed uber simple and didn't take very long at all. I have recreated it to an extent, and while it's not nearly as in-depth or efficient as the real implementation of cat, it was a learning experience.
 
#include <stdio.h>

int main(int argc, char *argv[])
{
    int c;
    FILE *fp;

    fp = fopen(argv[1], "r");
    if (fp == NULL)
    {
        printf("Can't open file, does it exist?\n");
        return 1;
    }
    else
    {
        while ((c = getc(fp)) != EOF)
            putchar(c);
    }
    fclose(fp);
    return 0;
}

When I compared my implementation to the one actually found in the coreutils Linux library I was a little surprised. The actual cat command is 768 lines of code whereas my toy cat is a whopping 20; keep in mind, though, I did not add any flag handling and it's in no way optimized. I was happy that the handling of the command line arguments was the same (how could it not be? derp).

check out the full source of the cat command Here

Earlier in the week I talked about doing some MOOCs based on the Google guide to technical development, because I really want to be a good software engineer. I found one that, while not on Google's list, is still pretty badass because you work in C, which is the language I want to become super proficient in anyway. It's called CS50 and it's a Harvard course; so far I'm blazing through it and on week 3. CS50
jpnix: (Default)
The bulk of my day job is actually analyzing phish, phishkits, drop scripts, etc. Lately we have run into an issue where a phishing campaign only accepts local IPs viewing the phishing content, blocking everything else in the .htaccess.

For this reason I wrote a little utility that lets us check whether we get any kind of response from the phish based on the geographic location of a proxy connection.







#!/bin/bash

echo " "
echo "------------------------------"
echo " GeoBlocked? "
echo "------------------------------"
echo " "
echo "Enter proxy list file name, if not in same directory provide full path: "
read LIST
echo "Enter URL to see if its being geoblocked"
read URL
echo " "
echo "Checking status of: $URL This could take some time"
echo " "
echo " "

PROXY="$(< "$LIST")"
red=`tput setaf 1`
green=`tput setaf 2`
reset=`tput sgr0`

function url_check()
{
#take the proxy from the first argument rather than relying on the loop's global
local proxy="$1"
export http_proxy="http://$proxy"

status="$(curl --max-time 15 --connect-timeout 15 -s -o /dev/null -I -w '%{http_code}' $URL)"
country="$(curl -s http://whatismycountry.com/ | sed -n 's|.*,\(.*\)|\1|p')"
DOWN="${red}$proxy - URL IS DOWN - $country${reset}"
UP="${green}$proxy - URL IS UP - $country${reset}"
TIMEOUT="${red}$proxy - Proxy connection took too long${reset}"

case "$status" in
"200"|"201"|"202"|"203"|"204") echo "$UP";;
"400"|"401"|"402"|"403"|"404"|"500"|"501"|"503") echo "$DOWN";;
*) echo "$TIMEOUT";;
esac
unset http_proxy;
}

for i in $PROXY; do
url_check $i
done
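The proxy list is just one host:port per line, and the script prompts for everything else. An example run (the script name, file name and proxies are made up):

$ ./geoblocked.sh
Enter proxy list file name, if not in same directory provide full path:
proxies.txt
Enter URL to see if its being geoblocked
http://suspicious-phish.example.com/login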



jpnix: (Default)



After moving to a new city and finding out that, a little less than a decade ago, the neighborhood my lady and I moved to was rife with drugs, prostitution and theft, I let the paranoia set in, despite the fact that we have had no problems and the neighbors have been exceptionally nice.

No matter, I'm an engineer, I solve problems - right?

So with a spare hour I set off to build a solution to help ease my paranoia about the house getting broken into while we are gone.

This is the result: a super simple webcam recording security solution for Linux based operating systems.

Things you will need.



  1. A webcam

  2. Linux OS (I used xubuntu 14.04)

  3. streamer software

  4. Dropbox



After gathering the materials, the first thing you will need to do is install streamer, the software we will be manipulating to record our surroundings.

sudo apt-get install streamer

After that, create a Dropbox account if you do not already have one and install it on your system. The default install directory is your home directory.

You can simply install the deb or follow the command line guide here for installation.
Dropbox install for linux

I chose this step because I figured if someone is in your house they will probably take the computer that is performing the recording, and if you store the pictures locally the effort is meaningless.

Once you have everything installed, create a new directory under ~/Dropbox (or wherever you installed it). I named mine "security". It will sync up to your Dropbox for review anywhere.

Lastly, the brains of the operation: a super tiny, simple "script" that does the work. Copy the code and save it as a shell script; I named mine "monitor.sh".



#!/bin/bash

#change to the dropbox directory you created to store pics
cd ~/Dropbox/security;

#continuous loop
for (( ; ; ))
do
	#streamer takes a pic, named with a timestamp so shots store chronologically
	streamer -c /dev/video0 -o "$(date +%Y%m%d-%H%M%S)".jpeg
	#wait 20 seconds before running again
	#3 pics per minute, change to however long you want
	sleep 20
done


There you have it. Now you just call the script and it will run.

./monitor.sh

Now anything that shows up in that directory will be synced to your Dropbox, and while you may still get robbed, at least you'll have something to show the police and hopefully get some form of justice.




Thoughts for improvement.


  • Get off of Dropbox - upload to a remote server instead (quick sketch after this list)

  • Set this up on multiple Raspberry Pis with wifi to monitor outside entry points
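For the first improvement, plain rsync over SSH gets the pictures off the machine without Dropbox; a minimal sketch, assuming key-based SSH access to a host named backuphost:

rsync -az ~/Dropbox/security/ user@backuphost:/var/security/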
