For anyone who is interested, I have stopped blogging here and setup my own personal web site/blog now.
Please visit this site for my latest writing:
Cheers,
🙂
Hi again Raspberry Pi fans. I recently figured out how to determine the core CPU and GPU temperatures on my new Raspberry Pi 4 with a shell script, and it didn’t take long before I got tired of running that manually. So I then figured out how to get the Pi’s HTML home page to include that info. Here’s what that looks like now that it’s all set up:
Here’s an overview of the process first:
And here are the details. (These steps assume you have Raspbian running as the operating system on the Pi; if not, adjust them to use whatever text editor and other tools you have.)
cd /home/pi
nano checkTemp.sh
#!/bin/bash
echo ""
# get the CPU temp in Celsius as a number
cpu=$(</sys/class/thermal/thermal_zone0/temp)
# get the GPU temp in Celsius as text
echo "GPU => $(/opt/vc/bin/vcgencmd measure_temp)"
# convert the cpu temp to Fahr.
far=$(echo - | awk -v cpu=$cpu '{print cpu / 1000 * 9 / 5 + 32}')
# round that number to at most two decimals
far2=$(echo - | awk -v f=$far '{print int(f * 100) / 100}')
# convert the cpu temp number
cel=$(echo - | awk -v cpu=$cpu '{print cpu / 1000}')
# round that number to at most two decimals
cel2=$(echo - | awk -v c=$cel '{print int(c * 100) / 100}')
echo "CPU => $cel2 C or $far2 F"
echo ""
currentdatestamp=$(date +"%A, %b %d, %Y %I:%M %p")
message="Pi temp $far2 degrees F - as of"
echo "$message $currentdatestamp" > /home/pi/www/html/pitemp.txt
chmod +x checkTemp.sh
./checkTemp.sh
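If you want to sanity-check the conversion math on its own, you can run the same awk steps against a made-up raw reading (48925 millidegrees Celsius here is just an example value, not something from your Pi):

```shell
# feed a sample raw reading through the same conversion and
# rounding steps the script uses
cpu=48925
far=$(echo - | awk -v cpu=$cpu '{print cpu / 1000 * 9 / 5 + 32}')
far2=$(echo - | awk -v f=$far '{print int(f * 100) / 100}')
echo "$far2 F"   # prints: 120.06 F
```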
crontab -e
*/15 * * * * /home/pi/checkTemp.sh
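For reference, here is how the five time fields in that crontab line break down (a comment-annotated copy of the same entry):

```
# minute hour day-of-month month day-of-week command
# "*/15" in the minute field means "every 15 minutes"
*/15 * * * * /home/pi/checkTemp.sh
```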
nano www/html/index.html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
function getData(){
$.get('./pitemp.txt',function(data){
console.log('got pi temp:'+data);
$('#piTemp').text(data);
},'text');
};
That JS uses jQuery’s ‘get’ function to read the text file and then jQuery’s ‘text’ function to set that text into the element with the ID of “piTemp” (which is case-sensitive).
<body onLoad="getData()">
<h5 id="piTemp"></h5>
That should get you a web page which shows something very close to your Pi’s current temperature every time you refresh the page.
🙂
I just recently had to replace an older external USB hard drive connected to my Raspberry Pi (Model 3 B) with a new one. Out of the box, the new 3 TB drive came formatted with the exFAT file system.
Setting it up was easy enough and I had my music server (CherryMusic) and my Plex media server pointed at and reading the drive in no time.
I configured SAMBA next. I need to be able to copy files onto the drive from across the network, from Windows PCs. I had this setup with my previous NTFS drive so I assumed it would just work for the new drive.
No. Of course not. So I struggled and struggled with this for a few hours, until my Aha! moment.
I finally figured out that the entry for the external drive in the Raspbian OS’ /etc/fstab file needed to be adjusted.
This version of that entry would allow me to read the files via Samba from other PCs but not to write to it:
/dev/sda1 /media/DRIVE exfat rw,exec 0 0
I had to add the ‘umask=0’ magic word to the options there to make it work, like this:
/dev/sda1 /media/DRIVE exfat rw,exec,umask=0 0 0
ExFAT, which is patented by Microsoft, is a good file system for external drives (USB sticks, SSD and HDD USB drives, etc.). It is supported on Windows, Mac and Linux, and it handles very large files, very large partitions and millions of files with no problem. That’s good for a multi-terabyte drive and it’s useful in a multi-OS household like mine.
BUT! It does not support user or group access control lists like NTFS does. So, in Linux (Raspbian in my case), you can’t use chown to change the owner or chmod to change the file permissions. When you try, you’ll see “That operation is not supported”.
What happened, then, when I used my original fstab entry to mount the drive without the umask part, was that the ‘root’ user and ‘root’ group owned the drive and all of the files and folders on it. And as I said, because of the exFAT file system, that can’t be changed.
So, all of this just led to this conclusion in my head: if I can’t change who owns the drive’s top-level folder in Linux, and I can’t change the permissions after it’s mounted, then I need to set these things as it’s being mounted – which is where we get back to the fstab entry.
In the fstab specs, for some file systems, you are allowed to set the umask option, to indicate how user permissions should be set. In my example above, adding “umask=0” tells the OS that all users are allowed to read and write and execute anything on the drive. If you use any other number, you’re saying which operations are prohibited (as opposed to allowed).
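To make that arithmetic concrete: the umask value names the permission bits to remove from the mount’s default of 777. Here’s a small sketch you can run anywhere (the umask values in the comments are just examples):

```shell
# umask=0   removes nothing              -> 777 (everyone can read/write/execute)
# umask=022 removes write for group/other -> 755
# umask=077 removes everything for group/other -> 700 (owner only)
printf '%o\n' $((0777 & ~0022))   # prints 755
```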
This Wikipedia page is very helpful on this topic: https://en.wikipedia.org/wiki/Umask
So, now with umask=0 in my /etc/fstab file, I can read and write files on that drive from my Windows PC.
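One optional refinement, separate from the umask fix: mount by UUID instead of device name, so the entry still works if the drive ever enumerates as something other than /dev/sda1. The UUID below is a made-up example; find your drive’s real one with `sudo blkid`:

```
# /etc/fstab - same options as before, but keyed to the partition's UUID
UUID=1234-ABCD /media/DRIVE exfat rw,exec,umask=0 0 0
```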
I hope this helps someone else down the line.
: )
I am a big baseball fan and this weekend I realized I wanted a fast way to check on the current MLB standings. I already have a Raspberry Pi at home, running a web server on port 80, so I thought I’d code up a page on there which would (a) go get the standings (the current rank of each of the thirty MLB teams) in some simple text format, and then (b) format them nicely into an HTML table.
After some digging around, I found the awesome erikberg.com site and its MLB APIs. You just point some tool or whatever at a URL and it returns the data you want in the response, in JSON or XML format. I found this URL would work just fine for my purposes:
https://erikberg.com/mlb/standings.json
That returns the full set of standings data in a nice, easy to parse JSON format and it’s current. And it’s free!
I then tried to use that from inside some JavaScript in my HTML home page, but ran into CORS issues. This was new to me, but it makes sense: modern browsers (like Chrome) will not let script code loaded from one site read data from a different site unless that other site explicitly allows it. My code was coming from my web server and trying to get and display data served from erikberg.com, and that’s a CORS violation. If browsers allowed that freely, malicious people could figure out ways to force you to download their bad payloads pretty easily.
So, I realized, I would need to get that JSON data on the server only, and save it to a file. Then my HTML/JS file could read that file (which is on the same server the HTML is running on) and that’s allowed.
So I wrote this simple bash script to accomplish the first part – getting and storing the data:
#!/bin/bash
sudo curl -o /var/www/html/data.json https://erikberg.com/mlb/standings.json
That says: use the tool called ‘curl‘ to visit the named web site and store the response into a file named “data.json”. I tested that and it works just fine. Then I used cron to set up a repeating, scheduled task so that script would be run a few times every day.
crontab -e
and then in there:
10 */4 * * * ~/getMLBStandings.sh >/dev/null 2>&1
Cron syntax takes some getting used to (but there are lots of helpful tools). That string says:
"At 10 minutes past every fourth hour of every day, run the getMLBStandings shell script, and send the output to /dev/null"
…which is just a way of telling cron that under no circumstances do I want to get any output, not even an email saying the process failed. That’s OK in this case but you should always think about whether that makes sense for your thing.
OK, so now I have a scheduled task which will go and get that baseball data for me multiple times a day, and every time it does, it will replace the older file with the new one. Sweet!
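One quick way to confirm a fetch produced usable data is to run the saved file through a JSON parser. This is just a sketch (the helper function name is mine, and python3 comes preinstalled on Raspbian):

```shell
# prints "valid JSON" (and returns success) only if the named file parses
check_json() {
  python3 -m json.tool "$1" > /dev/null && echo "valid JSON"
}
# e.g.: check_json /var/www/html/data.json
```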
Next, I just needed a way to have my HTML / JavaScript file read that file in and convert its data to an HTML table.
Now, my page already uses jQuery, so I started there. Sure enough, that magic collection of stuff already includes a getJSON API, so I wrote this little function for that part of this:
function getStandingsData() {
  var dataurl = './data.json';
  $.getJSON(dataurl, {}, function(result) {
    console.log('got JSON - it has ' + result.standing.length + ' entries.');
    buildHtmlTable(result.standing, '#standingsTable');
  });
}
That says: do the jQuery ($) getJSON function, reading the local data.json file, and assuming that works, call the next function, which converts that JSON into an HTML table. I added the call to that function to the body tag’s onLoad event, and now every time the page is loaded, the data is refreshed from the file.
<body onLoad="getStandingsData()">
Sweeter!
For the JSON-to-HTML-table part, I searched and found this Stack Overflow post with the answer:
https://stackoverflow.com/questions/5180382/convert-json-data-to-a-html-table
I lifted the code from one of the answers there, as you do, and added a couple of IF statements, since I only want the results for the National League, West division, and what-do-you-know, a mostly working page.
function buildHtmlTable(result, selector) {
  var columns = addAllColumnHeaders(result, selector);
  for (var i = 0; i < result.length; i++) {
    // conf and division are set elsewhere in my page (e.g. 'NL' and 'W')
    if (result[i].conference == conf && result[i].division == division) {
      var row$ = $('<tr/>');
      for (var colIndex = 0; colIndex < columns.length; colIndex++) {
        var cellValue = result[i][columns[colIndex]];
        if (cellValue == null) cellValue = "";
        row$.append($('<td/>').html(cellValue));
      }
      $(selector).append(row$);
    }
  }
}
function addAllColumnHeaders(result, selector) {
  var columnSet = [];
  var headerTr$ = $('<tr/>');
  for (var i = 0; i < result.length; i++) {
    var rowHash = result[i];
    for (var key in rowHash) {
      // cols (the list of columns I want) is defined elsewhere in my page
      if ($.inArray(key, columnSet) == -1 && $.inArray(key, cols) >= 0) {
        columnSet.push(key);
        headerTr$.append($('<th/>').html(key));
      }
    }
  }
  $(selector).append(headerTr$);
  return columnSet;
}
Ain’t she pretty? Pretty skookum, that is.
The buildHtmlTable function calls the addAllColumnHeaders function to do just what it says on the tin: read the JSON, determine all the column headers from the array in there, and send those back as the HTML table’s header row. Then it takes all the rest of the data and makes that the rows of the table.
I added the if statements to limit the columns to just those I want and the rows to just those where the ‘conference’ == ‘NL’ and the division == ‘W’.
With that, I have a working web page, which looks something like this:
So that’s it for now. Next, I want to figure out how to tweak those column labels to get rid of the underscores and use proper capitalization but this is good, for just me, for now.
Cheers,
: )
Have you ever wanted to run a command on a Linux computer, from a Windows computer? If so, then you might want to know about plink. It’s part of the venerable PuTTY package. You may already be familiar with how PuTTY makes it easy to connect to any SSH server from Windows, and that it includes a set of executables beyond the putty.exe tool. Plink is one of those. I’ll explain here how to use it and how it can be really helpful.
When might you do this? Well, for instance, whenever you want to:
Here’s the built-in Help text for the latest version of plink for Windows:
Plink: command-line connection utility
Release 0.66
Usage: plink [options] [user@]host [command]
       ("host" can also be a PuTTY saved session name)
Options:
  -V        print version information and exit
  -pgpfp    print PGP key fingerprints and exit
  -v        show verbose messages
  -load sessname  Load settings from saved session
  -ssh -telnet -rlogin -raw -serial
            force use of a particular protocol
  -P port   connect to specified port
  -l user   connect with specified username
  -batch    disable all interactive prompts
  -sercfg configuration-string (e.g. 19200,8,n,1,X)
            Specify the serial configuration (serial only)
The following options only apply to SSH connections:
  -pw passw login with specified password
  -D [listen-IP:]listen-port
            Dynamic SOCKS-based port forwarding
  -L [listen-IP:]listen-port:host:port
            Forward local port to remote address
  -R [listen-IP:]listen-port:host:port
            Forward remote port to local address
  -X -x     enable / disable X11 forwarding
  -A -a     enable / disable agent forwarding
  -t -T     enable / disable pty allocation
  -1 -2     force use of particular protocol version
  -4 -6     force use of IPv4 or IPv6
  -C        enable compression
  -i key    private key file for user authentication
  -noagent  disable use of Pageant
  -agent    enable use of Pageant
  -hostkey aa:bb:cc:...
            manually specify a host key (may be repeated)
  -m file   read remote command(s) from file
  -s        remote command is an SSH subsystem (SSH-2 only)
  -N        don't start a shell/command (SSH-2 only)
  -nc host:port
            open tunnel in place of session (SSH-2 only)
As you can see, to use this tool, you type plink, then any of its options, then specify a username & a host (the Linux computer) and last, the command you want to run on that other computer.
So, a very simple example might look like this:
(path to your putty folder)\plink.exe admin@192.168.0.111 sudo reboot
That command connects to the Linux computer at 192.168.0.111 as the user ‘admin’ and runs the ‘sudo reboot’ command there, rebooting that machine.
TIP: Save all of the putty tools into a folder already in your Windows path and you won’t need to specify the path to the .exe each time you want to run it. Learn more about doing this here.
Having this tool in your toolbox can be really helpful when you need to do something on a regular basis, across your network, and don’t really need a full SSH session to remain open after the command is finished. Just make sure the ‘other’ computer here is already running SSH and this will work like a champ.
When I find a good use for plink, I usually save it into a batch file on my Windows computer, so I can then run that .bat file from the Windows Start menu.
Here are the steps for doing that (assuming you have Windows 7 or later):
Here’s a shell script I wrote which uses the youtube-dl tool to grab the audio from any video on youtube (see my previous post for more on that).
#!/bin/bash
FILENAME=$1
URL=$2
youtube-dl -o "$FILENAME".flv "$URL"
mv "$FILENAME".mp3 ~/MP3/YT
This shell script expects to receive two parameters each time it is run: the filename and the URL of the video to be downloaded. It uses youtube-dl to connect to YouTube, stream the video, pull the audio from it and save an MP3 file with that audio into the ~/MP3/YT folder.
And here’s the Windows batch file I created to run that command from Windows:
@echo off
plink pi@myraspi yt.sh %1 %2
(My .sh file is in the Linux computer’s PATH so I don’t have to include its full path here.)
You can optionally include the password for that user (‘pi’ in my example) with the “-pw” option, but that would be pretty insecure, so think carefully about whether that makes sense in your environment. A more secure option would be to use plink’s “-i” option and specify a private key in place of a password.
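If you want to try the key-based route, the first step is generating a key pair. Here’s a sketch using OpenSSH’s ssh-keygen (the key file name is just an example); for plink you would then import the private key into PuTTYgen and save it as a .ppk file to pass with “-i”:

```shell
# create an ed25519 key pair with no passphrase, in the current folder
ssh-keygen -t ed25519 -f ./plink_key -N '' -q
# two files result: the private key and the .pub public key
ls plink_key plink_key.pub
```

The public key (plink_key.pub) then gets appended to ~/.ssh/authorized_keys on the Linux box, as with any SSH key login.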
To run your batch file from Windows, just enter the command anywhere Windows allows it (the Windows 7 or later Start field, the Run window or a CMD prompt, for instance).
With my youtube downloader script, I enter commands like this:
yt AFilename https://youtube.com/somevideourl
Give it a moment’s thought, and you can probably think of lots of good uses for this. Any time you want to remotely run a command on your Linux or Mac computer and don’t need a full-on logon session, you can spend five minutes setting this up and boom! Convenient solution!
I hope you find this useful.
: )
I decided to experiment with the Google AIY or Artificial Intelligence Yourself kit. With this kit, and a Raspberry Pi computer, you can build your own (somewhat limited) version of a Google Home assistant and then hack it all you want.
For a nice intro, check out this video from Christopher Barnatt at ExplainingComputers:
Here in America, you can pick one up easily at any MicroCenter store or online. I got mine for $9.99 (even though the bin in the store showed $24.99). I already had a Raspberry Pi 2B (256MB) sitting idle at home, so I decided to use that for this project, but the Google folks do recommend that you use a newer Ras Pi 3. If you go that route, the install will be somewhat simpler. With a Pi 2 Model B like mine, it works just fine, but I had to provide my own USB wifi network adapter, and do some of the setup & config myself to get everything running properly.
So, to get started with this, go get a Ras Pi, a 2.5 Amp power supply, a 4GB or larger micro-SD card, and the Google AIY kit. The kit comes with everything else you need to turn your Pi into a neat listening and talking box.
If you have a Ras Pi 3, you can just follow the great instructions from Google at this page and you should be just fine: https://aiyprojects.withgoogle.com/voice/
If you are going to use a Ras Pi 2 like I did, you can mostly just follow those steps. I decided not to use the Raspbian operating system image offered by Google, though; instead, I installed the Ras Pi Foundation’s default image of Raspbian Stretch (the current version available) and then added the Google software on top of that.
My steps were:
python3 assistant-library-with-button-demo.py
cp /home/pi/voice-recognizer-raspi/src/assistant-library-with-button-demo.py main.py
sudo systemctl enable voice-recognizer.service
And that’s it. Everything is working for us, the Google lady tells us a nice joke every now & then or we can ask when our next appointment is, and so on.
She can’t seem to play music from my Google Play Music library yet, but I still like her. I do want to write my own new abilities for her next, since hacking is always part of the fun. I’ll post here again with any updates on that front ASAP.
Enjoy!
🙂
Recently I realized I had uploaded some of my home movies to YouTube but no longer had local copies. These are videos I recorded of bands performing, so to me, the audio is the important part. Since I don’t trust the closed-source ‘free’ web sites you find when you do a “download youtube” search, and I don’t like installing closed-source software on my Windows machine either, I decided to see if my Raspberry Pi could help me with this. It turns out, it can.
The magic bits come from this open source project on GitHub: youtube-dl.
https://github.com/rg3/youtube-dl
The one-time setup is really simple, and then you have lots of options for running the software once it’s installed. You can also explore the options shown in the project’s ReadMe to improve on what I’ve written here, depending on your needs.
Follow these steps to install it and get it setup:
Note: This worked just fine for me, running Raspbian Jessie. Your mileage may vary.
sudo apt-get update
4. If, like me, you want the tool to automatically extract the audio out of the videos, then type this and press enter:
sudo apt-get install libav-tools
Note: libav-tools is a fork of the ffmpeg tools, which Debian shipped in place of ffmpeg for a while. I am using it to extract audio from my video files. It’s an official Debian package.
5. When that’s done, to install youtube-dl, type this and then press enter:
sudo curl -L https://yt-dl.org/downloads/latest/youtube-dl -o /usr/local/bin/youtube-dl
6. Make the tool executable by typing this and then press enter:
sudo chmod a+rx /usr/local/bin/youtube-dl
7. Next, let’s create a configuration file so you don’t have to enter the same options every time you use the tool.
8. Type this and press enter:
sudo nano /etc/youtube-dl.conf
9. The file probably won’t exist before this, but that’s OK. You can enter any of these Configuration / Options you like into your youtube-dl conf file. For me, I just want it to always pull the audio out of the video, so this is what my configuration file has in it:
# extract the audio
-x
# Use MP3 format
--audio-format "mp3"
Note: if you want to keep the video around after that extraction, be sure you use the “-k” option or youtube-dl will delete the file after the audio is extracted.
11. While in the nano editor, when you’re done editing your file, press Ctrl-X, then Y, then press enter, to save the file and exit nano.
12. Now that this is all setup, you can test it out. Go and get the URL for a video from youtube.com (or lots of other video web sites), and copy it to your clipboard. Then enter this command, pasting your URL at the end:
youtube-dl {any options you want to use} -o "make up some video name here.flv" {paste the YouTube URL here}
13. When you press enter on that, youtube-dl should show you that it’s downloading the video to the file name you put after “-o”, and then say it’s doing whatever else you told it to, either at the command line or in your conf file. For instance, for me it auto-extracted the audio from the video to an MP3 file and then deleted the video file.
14. One way to make this even nicer is to create a shell script to shorten the command you have to enter in step 12 above.
15. At the terminal prompt again, type this (or name the sh file anything you want) and press enter:
nano yt.sh
16. In Nano, type:
#
# youtube-dl shortcut script
#
youtube-dl -o "$1".flv "$2"
# insert any other commands you want to use to automate this process.
# For instance, you could move the extracted MP3 to another folder next:
mv "$1".mp3 ~/NebulousThinking/
17. Then type Ctrl-X, press Y and then press enter to save the file and exit from Nano.
18. Make that shell script executable with this command:
sudo chmod +x ./yt.sh
And you are done. Now any time you need to download one of your videos from YouTube, just jump into your Pi’s terminal and enter the command like this:
Note that this command starts with a period and then a forward slash. And the URL here is just an example; substitute with your video’s URL.
./yt.sh "My favorite YT video" https://www.youtube.com/watch?v=dQw4w9WgXcQ
…and in a minute or a few, you’ll have your video or MP3 or whatever you set up the tool to create.
You can explore all of the options made available via the youtube-dl application, including its support for authentication, subtitle handling, the “-g” option for when you only want to stream the video to another local application, requests with multiple URLs at once (for downloading playlists), and a lot more.
Enjoy!
: )
I recently had a problem: I receive a lot of emails (in Microsoft Outlook 2016) from my co-workers, and I realized I wished I could reply to them with the same, canned response.
I looked around and found this good idea, so I’m sharing it here. The solution is really more of a work-around but it’s the best and simplest option for all recent Outlook desktop program versions. (This won’t work in the web browser versions, sorry)
Enjoy 🙂
Use these commands at a terminal prompt to quickly answer questions about the Raspbian OS running on your Raspberry Pi (though some of these commands are useful with other OSs as well).
lsb_release -a
Sample command output:
No LSB modules are available.
Distributor ID: Raspbian
Description:    Raspbian GNU/Linux 8.0 (jessie)
Release:        8.0
Codename:       jessie
Help for this command:
Usage: lsb_release [options]

Options:
  -h, --help         show this help message and exit
  -v, --version      show LSB modules this system supports
  -i, --id           show distributor ID
  -d, --description  show description of this distribution
  -r, --release      show release number of this distribution
  -c, --codename     show code name of this distribution
  -a, --all          show all of the above information
  -s, --short        show requested information in short format
uname -a
Sample command output:
Linux raspi 4.1.17+ #838 Tue Feb 9 12:57:10 GMT 2016 armv6l GNU/Linux
Help for this command:
Usage: uname [OPTION]...
Print certain system information.  With no OPTION, same as -s.

  -a, --all                print all information, in the following order,
                             except omit -p and -i if unknown:
  -s, --kernel-name        print the kernel name
  -n, --nodename           print the network node hostname
  -r, --kernel-release     print the kernel release
  -v, --kernel-version     print the kernel version
  -m, --machine            print the machine hardware name
  -p, --processor          print the processor type or "unknown"
  -i, --hardware-platform  print the hardware platform or "unknown"
  -o, --operating-system   print the operating system
      --help     display this help and exit
      --version  output version information and exit
cat /etc/rpi-issue
Example contents of this text file:
Raspberry Pi reference 2016-02-09
Generated using Pi-gen, https://github.com/RPi-Distro/Pi-gen, stage4
cat /etc/os-release
Example contents of this text file:
PRETTY_NAME="Raspbian GNU/Linux 8 (jessie)"
NAME="Raspbian GNU/Linux"
VERSION_ID="8"
VERSION="8 (jessie)"
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
cat /etc/debian_version
Example contents of this text file:
8.0
Newer versions of Raspbian also give us this command, from systemd:
hostnamectl [options]
Example output from this command (with no options):
   Static hostname: raspberypi
         Icon name: computer
           Chassis: n/a
        Machine ID: 2e731345345df4244978f314763453451c8e
           Boot ID: 523423fe43876dz345876dc876823335
  Operating System: Raspbian GNU/Linux 8 (jessie)
            Kernel: Linux 4.9.35-v7+
      Architecture: arm
Thanks to the Raspberry Pi Forums, the /r/raspberry_pi community on Reddit and everyone on the Ras Pi StackExchange for helping me find these commands!
Did I misstate anything here? Do you know of any other similar commands? Please post a Comment here and help out all of us Raspberry Pi enthusiasts.
Enjoy!
🙂
I use Microsoft OneNote all day, every day, for all of my note taking, both at work and at home. For the longest time, I’ve wanted to be able to paste an image into my notes and then have a border drawn around the image’s edges, like I do in Word or Outlook emails. OneNote doesn’t explicitly support this feature though, as is evidenced by the lack of a “Format” menu ribbon when you select an image in OneNote. I am using OneNote 2016 now, by the way, but this has been a limitation for a long time, so this problem applies to older versions just the same.
The work-around I found today is pretty easy and effectively solves my problem, so I am sharing it here with you all.
The fix for this is to insert a 1×1 table wherever you want the picture to appear and then paste the image inside that table’s only cell.
Here are the steps then:
You can then format the table any way you like and so now the image has a border around it.
I hope this helps someone out there, as it’s been bugging me for a long time.
🙂