There are times when it is nice to be able to print something remotely--where remote can mean from a cell phone to a printer across the room when you visit an associate's home office, or to a printer 1000 miles away. The article that follows describes two possible solutions for remote printing through the cloud that can be useful in different situations. The first solution described, Epson Connect, is perhaps the easiest to configure and requires no special software on your cell phone or laptop--one of the interfaces is simply emailing a document or photo to a special email address. The second service described is Google Cloud Print, a much more general service that works with printers from many manufacturers. The article is divided into the following sections:
- Configuring an Epson WF-2540
- Configuring and Using Epson Connect
- Configuring Google Cloud Print to Share a Printer
- Printing to a Google Cloud Printer from within Chrome
Configuring an Epson WF-2540
The printer for this example is an Epson WF-2540, an all-in-one scan-print-fax machine that is relatively inexpensive. It has Wi-Fi connectivity and supports both Epson Connect and Google Cloud Print. Configuring the printer for these services is relatively easy, but there are some glaring security holes that really require that you not use your primary Gmail user ID to configure Google Cloud Print.
- In the operator panel, scroll through the settings until you find the IP address of the printer.
- On a computer connected to the same network, enter the printer's IP address into the browser's URL bar to get to the administration panel for the printer. This is really where the security exposures are--no admin credential is needed to reach the administration panel, so anyone with access to the Wi-Fi network could reconfigure the printer.
- On the admin menu, go to the configuration menu options for Epson Connect or Google Cloud Print as appropriate.
- If you want to forward scanned documents, configure the email and/or Cloud storage accounts for the scanning subsystem.
Because the security of the device is unknown, you should use a Gmail (or cloud storage) account and password that you can abandon--if someone with access to the Wi-Fi network is able to penetrate the printer's embedded web server and access passwords (there is no way to tell how well protected they are), you don't want to lose control of your primary Gmail account.
Configuring and Using Epson Connect
As part of the setup on the printer, you will create an Epson Connect user ID. Once you have created the user ID, log in and configure the various cloud printing features. The login and printer status screens are shown in Figures 1 and 2. The first step is to look at or perhaps change the email address for this printer as shown in Figure 3. Once you are happy that the email address is one that people will remember, go to the “Approved Senders List” menu item under the Email Print section, and add the email addresses of authorized users as shown in Figure 4. If you want to use the printer to scan and email to someone or scan to a cloud storage account, configure that under the Scan to Cloud menu section as shown in Figures 5 and 6.
Configuring Google Cloud Print to Share a Printer
Google Cloud Print is a much more general printing solution where the only printing interface is as a traditional printer--there is no email address for a printer defined to Google Cloud Print. That said, Google Cloud Print will support traditional printers that don't have any special software, which is a huge benefit for most individuals and organizations. For security reasons, you will probably want to create a new Google ID for your printers, since the user ID and password will have to be configured on the printer, and there isn't really any way to know how securely the software in the printer stores your credentials. To define a cloud printer, log in to Google and go to https://www.google.com/cloudprint. From there, click on the “Try it now” button in the lower left corner, as shown in Figure 7.
- You will see a list of Google Cloud Printers defined on your account as shown in Figure 8--Fed Ex Office and Google Docs are standard destinations that are part of the service. You can add printers through the dialogs for cloud-ready and classic printers on the left.
- Select the “Manage printer” button highlighted in the lower right corner.
- You will see a list of printers like the one in Figure 9. Highlight one to share with your main user ID and other users and press “Share” as shown in Figure 10.
- Figure 11 shows the panel where you will see a list of Google users that are authorized to print to your cloud printer. You can add additional Google users in the box at the bottom and press “Share”.
Printing to a Google Cloud Printer from within Chrome
Printing to a Google Cloud Printer can be easy or infuriating. Anything that you can view in the Chrome browser is easy to print on a cloud printer. To print from other applications, you will need additional software.
To print from within the Chrome browser:
- Log in to Google
- Select “Print” for the page that you want to print
- Select the “Change” button for the printer destination as shown in Figure 12
- Select one of your Google Cloud Print printers as shown at the bottom of Figure 13.
- Written by Bruce Moore
Setting Up a Network of Security Cameras with Recycled Equipment
Setting up a security camera system for a home office or small business would be costly if one were to use commercial grade systems, but an effective system can be put together quite inexpensively using recycled or repurposed hardware. The article that follows describes three security camera solutions for three different needs using repurposed equipment:
- A Simple Live-camera Security Camera Solution Using Unused Android Cell Phones. This example is intended for a simple real-time video solution--perhaps to monitor the front door so that you can determine whether or not to answer when someone buzzes the door.
- An Email-based Security Camera Solution Using Obsolete Web Cameras. This example is intended for checking things like sump pump operation, HVAC operation (aim the camera at a thermometer), checking for snow removal, or determining whether or not the cat sitter is feeding the cat while you are on vacation.
- A Secure Copy (scp command) Based Remote Storage Solution. This example is intended for an intrusion deterrence and investigation application where you need to have frequent video pictures to determine whether or not someone entered a restricted space, when they entered, and what the person did.
A Simple Live-camera Security Camera Solution Using Unused Android Cell Phones
Most cell phones made over the last five years have cameras that are more than sufficient for security video, and a cell phone typically draws very little power--an advantage for any device that will be powered on 24 hours a day. There are several Android applications that allow you to use an old cell phone as a security camera with little configuration or work. IP Webcam is one example. To use this as a live feed accessible from the Internet, you would need to set up dynamic DNS and port forwarding on your router. Setting up dynamic DNS conflicts with the Terms and Conditions of some Internet Service Provider agreements, so check your agreements before configuring this type of arrangement.
An Android cell phone could be used as one of the security cameras in the solutions described in subsequent sections in place of the obsolete AirLink 101 camera that is described.
An Email-based Security Camera Solution Using Obsolete Web Cameras
When they were being discontinued a number of years ago, I bought several Airlink 101 AIC-250W WiFi security cameras. The camera can email a photo on a schedule or upload to an FTP site on a schedule. This served my needs until Verizon blocked port 25--the standard port for email servers. The Airlink device is hard coded to use port 25 (and WiFi channel 11), so I couldn't use it anymore without some changes.
I first looked at doing port translation, but the routers that I use only offer inbound port translation (port forwarding), and my current D-Link router isn't supported on DD-WRT yet.
I next looked at using the FTP function on the Airlink to copy security photos to a web server. The most straightforward solution would be to transfer the files to a web server outside the firewall where I could log in to check, but the Airlink device only supports FTP, which sends password information in clear text and transfers files unencrypted. Security camera photos need to be stored securely, so I needed to look for some intermediate server that would allow secure transmission outside the firewall.
I had an old Western Digital Mybook World Edition II Network Attached Storage (NAS) device that was now too small to be used as a backup device. Internally, it runs a stripped down version of Linux; there is a strong community that maintains add-in tools for the device that change it from a NAS device to a fairly full-featured low-power server. It should be noted that installing these tools voids any warranty and can render it unusable, but the device was unused, so accidentally turning it into a brick would not have been the end of the world.
Using the NAS as a collection server resulted in a fairly flexible security camera configuration that works well. The approach described is not restricted to the Airlink and the WD NAS device--you could just as easily use another Linux-based NAS device or much more easily a Raspberry Pi.
The example that follows discusses the “kitty cam” portion of the security camera network; the primary purpose of this portion is to provide a convenient way to verify that the cat sitter is stopping by to feed the cat while we are on vacation (the camera is pointed at the food bowl). The security portion of the network uses much the same approach but has different settings and off-site storage so that if an intruder steals the NAS device, we still have camera footage. All devices are connected to UPS devices so that they continue to run in a power outage.
Airlink 101 AIC-250W
The AirLink 101 AIC-250W Webcam was sold about a decade ago and was an inexpensive camera at the time. It supports wired Ethernet and 802.11g WiFi connections, has a maximum resolution of 640x480, and will send photos as email or FTP. It came with a Windows application that allows you to view and manage multiple cameras that are on the same subnet. Support was dropped almost immediately after manufacture as the manufacturer moved on to new products. The firmware restricts WiFi to channel 11, and email to port 25, which is now routinely blocked by most ISPs as an approach to reduce email spam.
Configure Camera for FTP
After getting the Airlink to connect to the Wi-Fi network, the primary setup is on the Configuration->Upload page shown in Figure 1. The FTP address, port number, user name, and password are configured on the top portion. Because FTP is not a secure protocol, you should define a separate user for this so that if the ID is compromised, the intruder won't gain wider access to your network.
For the schedule operation for the kitty cam, I set up the camera to take a photo every 600 seconds from 7:00 AM until 7:00 PM--the time during which the cat sitter would most likely refill the food bowl.
Western Digital Mybook World Edition II Network Attached Storage (NAS)
The Western Digital Mybook World Edition was an early entry into the Network Attached Storage market. It came in "blue light" (I) and "white light" (II) versions and offered a free lifetime subscription to MioNet, a service that allows you to access the drive from outside your firewall. The device firmware is based upon Linux, and there is a significant community of users who have compiled firmware updates to provide additional functionality. Updating the firmware voids the warranty and can disable the device, but as firmware modifications go, this is perhaps one of the easiest devices to modify without damaging it, as the procedure is based upon the addition of programs rather than the total replacement of the firmware as is the case for many other devices.
Configure WD Mybook for FTP
To use the Western Digital (WD) NAS device, the first step is to configure the FTP service, as shown in Figure 2. I would normally change the default port, but the AirLink devices didn't work on the 8000-8999 range that the WD NAS supports.
After you have turned on FTP, you will need to create a user ID (and password) that matches the user ID that you set on the AirLink camera. Figure 3 shows the User setup screen on the WD NAS device.
Configure SSH on WD Mybook
The next step in setting up the WD NAS is to configure SSH to allow you to access the command line and the Linux operating system on the WD NAS. Figure 4 shows the screen where you turn on SSH access. You should immediately log in and change the password from the default “welc0me” to a secure password using the commands shown in Figure 5.
Alternative Setup using MioNet
At this point, you could install and use the MioNet software that is part of the stock WD NAS device. When I installed the MioNet software on my laptop, the laptop would no longer boot, so I decided that MioNet would not be part of my solution.
Install Optware on Mybook World Edition
A community of users has ported a large number of utilities to the WD Mybook via the “Optware” suite of packages. The installation instructions are available at http://mybookworld.wikidot.com/optware and won't be repeated here. To set up the capabilities for email, you will need to install Optware, but recognize that this will void any warranty and may permanently damage the device if you make a mistake.
Install mutt, msmtp, cron and zip
After you have installed Optware, you will need to install the optional packages for mutt, msmtp, cron and zip using the command (run this under root):
/opt/bin/ipkg install mutt msmtp cron zip
Copy Certificate Authority Certificates to WD NAS
To protect against man-in-the-middle attacks on the email that you send, you should verify the trust signature of the email server that you are using. To do this, you will need to provide the SSL certificates of the Certificate Authority (CA) that issued the certificate for your mail server. All of the major operating systems and web browsers update root CA certificates as part of their normal maintenance stream. WD does not ship or update these as part of firmware updates, so you will need to provide them from some other source. The approach differs depending upon the environment that you are using for your primary workstation; Linux is by far the easiest for this operation.
On Linux, these are found in /etc/ssl/certs/ca-certificates.crt. To transfer these to your WD NAS, use the commands shown in Figure 6.
On Windows, you will need to use the certutil program to export the root certificates.
On OS X, you will need to use the Keychain Access program found in the Applications->Utilities folder.
To set up the msmtp package, you will need to create a .msmtprc file in the /root directory with contents as shown below, with your own information substituted. The password field is unencrypted, so this file should have permissions of 600. You should use an email ID that is used only for your security camera, so that if it is compromised, you won't lose your primary personal email ID. The tls_certcheck off directive tells msmtp not to verify the certificate of the email server and leaves the installation open to a man-in-the-middle attack. You can extract the root certificate for your email server and specify that so that the msmtp client will verify the identity of the email server.
# Set default values
# Set values for mss account
# Set default account to use for sending
account default : account_alias
If you have problems with your msmtp client authenticating with the email server, comment out the tls_trust_file line and uncomment the tls_certcheck off line. This will disable authentication of the server and leave you open to man-in-the-middle attacks, but it will allow you to get everything else working.
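As a concrete illustration, a minimal .msmtprc might look like the sketch below. The account alias matches the skeleton above, but the server, port, addresses, and trust file path are placeholder assumptions--substitute the values for your own mail provider.

```
# Set default values
defaults
tls on
tls_trust_file /root/certs/ca-certificates.crt
#tls_certcheck off

# Set values for the camera account
account account_alias
host smtp.gmail.com
port 587
auth on
user camera.account@gmail.com
from camera.account@gmail.com
password your-password-here

# Set default account to use for sending
account default : account_alias
```

Remember to set the file permissions to 600 as described above, since the password is stored in the clear.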
The next step is to configure mutt, the package that sends the email. The first line tells mutt to use the msmtp package to send mail, and then gives the location of the msmtp profile that we created in the previous step.
set sendmail="/opt/bin/msmtp -C /root/.msmtprc"
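A minimal /root/.muttrc built around that line might look like the following sketch; the from address and display name are placeholder assumptions.

```
set sendmail="/opt/bin/msmtp -C /root/.msmtprc"
set use_from=yes
set from="camera.account@gmail.com"
set realname="Kitty Cam"
```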
Write Script to Send Email
Next, write a short script to zip some of the photos from the security camera and email them to a list of users. The script first does a cd to the directory with all of the photos. The parameters for the mutt command are as follows:
- -s gives the subject line for the email
- -c gives a list of the destination email addresses
- -F gives the mutt profile path that we created in the previous step.
- -a gives the name of the file that we are attaching.
- < directs the email body text from the file /shares/kitty/msg.txt
chmod 640 *.jpg
#tar -czvf photos.tgz kitty_cam$TODAY*.jpg
/opt/bin/zip photos.zip kitty_cam$TODAY*.jpg
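Putting the fragments above together, a sketch of the complete mail_photo.sh might look like the following. The directory, recipient address, subject line, and message file are assumptions based on the kitty-cam example, not the exact original script.

```shell
#!/bin/sh
# mail_photo.sh -- sketch of the hourly kitty-cam photo mailer
TODAY=$(date +%Y%m%d)            # date stamp used in the camera's file names
cd /shares/kitty 2>/dev/null     # directory where the camera FTPs its photos
# Zip up today's photos, then mark them as processed
/opt/bin/zip photos.zip kitty_cam$TODAY*.jpg
chmod 640 *.jpg
# -s subject, -c recipients, -F mutt profile, -a attachment, < body text
/opt/bin/mutt -s "Kitty cam photos" -c owner@example.com \
    -F /root/.muttrc -a photos.zip < /shares/kitty/msg.txt
rm -f photos.zip
```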
Finally, it is time to set this up to send an email on a schedule using a cron service. There is a cron service that is installed as part of the normal WD NAS firmware, and a separate one that is installed as part of the Optware software. I was unable to get the normal cron service to work but was able to get the Optware service working by following the directions in this article about crontab. The key step is to use the following command to update the crontab:
/opt/bin/crontab -e
If you omit the /opt/bin path, you will get the normal WD NAS installation of crontab, which points to a crontab file that does not exist.
I set up my crontab to run the mail_photo.sh script written in the previous step at 59 minutes past the hour from 7:00 AM to 7:00 PM, and to mail all of the photos taken in the previous hour:
# ---------- ---------- Default is Empty ---------- ---------- #
59 7-19 * * * /root/mail_photo.sh > /shares/kitty/mail.log
When you are done, make sure to change all of the permissions on the files that you have created in the /root directory to 700, to prevent access to passwords from users other than root.
A Secure Copy (scp command) Based Remote Storage Solution
The example shown above is for a very simple security situation--just making sure that the cat sitter is stopping by each day. For intrusion deterrence and investigation, you would set up the camera to take a photo every few seconds, and upload the information to an offsite server that stores days or weeks of video. For this, you would use the gpg command to encrypt your camera files, the scp command to transfer them to a server, and a cron job that runs once per minute.
Calculate Storage Requirements
Before setting up this type of arrangement, make sure to estimate the storage requirements and set the video quality and frequency appropriately. At the highest resolution (640x480), the AirLink camera generates images that are about 32K in size. One per second would result in about 2.7 Gigabytes per day for both file transfer and storage. For one month, this would be about 83 Gigabytes of file transfer and storage.
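The arithmetic behind those figures can be checked with a quick script; the 32 KB image size is the article's estimate for the AirLink's 640x480 output.

```shell
# Estimate transfer/storage for one ~32 KB image per second, around the clock
BYTES_PER_IMAGE=32000
PER_DAY_GB=$(awk -v b=$BYTES_PER_IMAGE 'BEGIN { printf "%.2f", b * 86400 / 1e9 }')
PER_MONTH_GB=$(awk -v b=$BYTES_PER_IMAGE 'BEGIN { printf "%.0f", b * 86400 * 30 / 1e9 }')
echo "$PER_DAY_GB GB per day, $PER_MONTH_GB GB per month"
```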
Install GNUpg for Encryption
OpenSSL is installed by default and works well for symmetric key encryption and S/MIME certificate-based encryption, but it does not work well for encrypting large files with public keys that are not certificates. GNUpg works much better for this. To install it, use the command
/opt/bin/ipkg install gnupg coreutils
To configure it, you will need to generate a key and export it on your main workstation. This is the key you will use to decrypt the files. Remember the password. The commands below will work for the GNUpg available on Linux, OS X Macports, and Windows Cygwin.
The first command will prompt for your name and email address, while the second command will export your public key. Next, you will generate a private key that will only reside on the WD NAS. Use the list-secret-keys option to identify the secret key that will be used only on the WD NAS:
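Those first two commands (generate the key, then export the public key) might look like the sketch below; the name and file name are placeholders for your own values.

```shell
gpg --gen-key                              # prompts for name, email address, and a passphrase
gpg --export -a "Your Name" > public.asc   # export the public key in ASCII-armored form
```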
gpg --list-secret-keys
gpg --export-secret-keys -a 1234ABCD > secret.asc
Now, copy the keys to the WD NAS. On the WD NAS, you will need to import the key:
For the private key, use
gpg2 --allow-secret-key-import --import secret.gpg.key
and for the public key, use
gpg2 --import yourkey.gpg
Note that the command on the WD NAS under Optware is gpg2 instead of gpg.
Create ssh Keypair for Secure Copy (scp)
For the secure file copy to work, you will need to generate an ssh keypair using the ssh-keygen command on the WD NAS:
~/.ssh # ssh-keygen -t rsa -f id_rsa
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in id_rsa.
Your public key has been saved in id_rsa.pub.
The key fingerprint is:
e4:e5:62:af:45:42:c3:b9:3e:d3:6d:37:9e:48:71:4d root@backup
~/.ssh #
Next, upload the public key file id_rsa.pub to the .ssh directory of the account where you want to store the video files:
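The upload could be done with scp itself; the user name and host below are placeholders for your storage account.

```shell
scp ~/.ssh/id_rsa.pub user@storageserver.example.com:.ssh/
```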
Finally, on the remote server, concatenate the public key to the authorized_keys file:
cat id_rsa.pub >> authorized_keys
Script to Encrypt and Upload
The scripts that follow assume that all video is stored on the WD NAS via FTP in the same way that it was stored in the previous example. For this script, we want to combine all files generated in a minute, encrypt them and securely transmit them to the server.
# Script to encrypt and send video files to remote server
# When the AirLink 101 FTPs files, they have 755 permissions. Use this to determine which files
# have been transmitted and which have not.
# Create a list of files that have permissions indicating no transmission and zip them.
# Note that files that come in during zip activity won't get transmitted by this script.
ls -l /shares/security/*.jpg | grep ^-rwxr-xr-x | cut -b 56-150 | sed -e 's/ //' | tee chmod_list.txt | zip security_$TODAY.zip -@
cat chmod_list.txt | while read X; do chmod 640 $X; done;
echo "Completed zipping"
# Encrypt the file
# --batch and --homedir are required to run the script under cron
echo "Completed encryption"
# Copy the file to the remote server
# -- this requires previous set-up of public key access to ssh
echo "Completed sending"
# Erase working files and change permissions that are used to determine what has been sent.
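The encrypt, send, and cleanup commands behind those comments might look like the following sketch. The recipient address, remote server, and paths are assumptions, and TODAY is assumed to have been set earlier in the script (e.g. TODAY=$(date +%Y%m%d%H%M)).

```shell
# Encrypt the zip file for the off-site recipient
# --batch and --homedir are required to run the script under cron
/opt/bin/gpg2 --batch --homedir /root/.gnupg --recipient camera.account@example.com \
    --output security_$TODAY.zip.gpg --encrypt security_$TODAY.zip

# Copy the encrypted file to the remote server
# -- this requires the public key access to ssh set up in the previous section
scp -i /root/.ssh/id_rsa security_$TODAY.zip.gpg \
    user@storageserver.example.com:security/

# Erase working files
rm -f security_$TODAY.zip security_$TODAY.zip.gpg chmod_list.txt
```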
The series of piped commands
ls -l /shares/security/*.jpg | grep ^-rwxr-xr-x | cut -b 56-150 | sed -e 's/ //' | tee chmod_list.txt | zip security_$TODAY.zip -@
creates the zip file to be encrypted by making a list of files that have the permissions that are left after FTP (ls and grep), cuts out the file name (cut), removes blanks (sed), creates a file list that will be used for chmod (tee), and zips up the files into a single file for encryption. The chmod command changes the file permissions to the permanent storage permissions.
The gpg command requires the --batch and --homedir parameters to work as a cron job. If this were run from the command line with the full set of environment variables and access to stdin, it would work without these two parameters. The --recipient parameter is used to look up the public key installed previously.
The final commands remove the working files.
To edit the crontab, remember to use the /opt/bin/crontab -e command to get the Optware version of the crontab command.
This application will require a cron job that runs every minute all day, every day, so the crontab should look something like this:
# ---------- ---------- Default is Empty ---------- ---------- #
* * * * * /root/security_camera.sh > /root/camera.log
Decrypting the Video
To view the video, you will need to download it to your primary workstation or another workstation where you have the private key installed. To decrypt the files, use the command
for FILE in *.gpg; do gpg --output "`basename $FILE .gpg`".zip --decrypt $FILE; done;
- Written by Bruce Moore
Configuring Server Side Page Caching in Joomla
- Google PageSpeed Webmaster Tool
- Native Joomla Caching and Compression
- Using Akeeba Admin Tools to Write .htaccess
- Using the JCH Optimize Extension to Compress, Minify, and Merge
- Using the JBetolo Extension to Compress, Minify, and Merge
- Image Compression with optipng
- Image Compression with jpegtran
- Client Side Caching
- Google Analytics Timing Results
Google PageSpeed Webmaster Tool
The article that follows shows how to use Joomla settings and extensions to improve page loading time and, in particular, to improve the Google PageSpeed score for a page. PageSpeed is a utility for analyzing the way a page is optimized to reduce the time it takes to render the page in a browser. A good PageSpeed score is helpful in getting a good user experience in many cases. For a low-volume web site where the server cache expires before every access, turning on the various settings for caching can actually slow down the render time as the server rebuilds the cache before serving the page. In this low-volume case, getting a good PageSpeed score for your site will actually hurt your render times. You will have to experiment to determine what works best for rendering your web site.
The PageSpeed tool is located at
To use PageSpeed, just enter the URL of the page that you want to analyze. All of the examples in this article are from the page https://www.mooresoftwareservices.com/loan-pricing/9-effective-yield-loan-fee-amortization on this site. The page contains basic HTML, tables, and calls to MathJax to render mathematical formulas, but it does not contain much in the way of images. Figure 1 shows the Google PageSpeed rating for this page with no compression or caching enabled.
The Google PageSpeed algorithm rewards a number of things that generally fall into three categories of items that can be improved via Joomla and extension configuration: minifying, compressing, and merging files. These three categories will be described in the following sections.
Minifying a file is the process of removing unnecessary characters from the file, typically spaces, line-end characters, comments, and other characters that are not needed for correct execution of the file. A minified file does not need to be “un-minified” by the client browser. Minifying generally does not break things on a Joomla web site, but it does take server processing.
Compression is the process of reducing the size of the file using a compression program. Unlike a minified file, a compressed file DOES need to be uncompressed prior to execution by the client browser. Compression generally does not break things on a Joomla web site, but it does take server processing.
Native Joomla Caching and Compression
Joomla has a basic built-in compression and caching capability that is implemented by setting Joomla configuration variables in the configuration.php file. Joomla does not have a native merging capability. The Joomla native caching has one major drawback--article retrievals from cache will not be reflected in the hit counts for the articles. In the Global Configuration panel shown in Figure 2, setting the file type and cache time will set the cache variables as shown in Figure 3:
Similarly, the native setting for GZIP compression is on the system panel as shown in Figure 4.
For my web page, the native Joomla cache and compression settings didn't really improve the PageSpeed score that much, and they broke the Joomla internal hit counters for articles so that I didn't have accurate information on which articles were getting traffic. At this point, I started looking for extensions that would handle the caching while maintaining accurate hit counts for all of my articles.
Using Akeeba Admin Tools to Generate .htaccess
Akeeba Admin Tools have the capability to generate a .htaccess file that will do a great deal to secure your Joomla web site, but which will also set up the compression and most of what you need to get a good score on Google PageSpeed. In the section Custom .htaccess rules shown in Figure 5, add the local caching rules listed in Figure 15.
Using the JCH Optimize Extension to Compress, Minify, and Merge
- Basic JCH Optimize Configuration
- Getting in-Article Google Trends Scripts to Work Properly
- Getting MathJax Formulas to Render Properly
- Getting Google Analytics to Work Properly
- Getting Template Images to Work with Lazy Loading
- CSS Delivery
Basic JCH Optimize Configuration
Getting in-Article Google Trends Scripts to Work Properly
The next section discusses the changes necessary to get MathJax working properly.
Getting MathJax Formulas to Render Properly
The next section discusses how to use the advanced settings to get Google Analytics to work.
Getting Google Analytics to Work Properly
Many Joomla sites use Google Analytics for search engine optimization (SEO) analysis, and use the sh404SEF extension to manage the insertion of the code that calls Google Analytics for visit logging. To get this to work, you will need to exclude this script as shown in Figure 11. To test this, you will need to go to the “Real Time” section in Google Analytics without any filters for your home location and then open a browser that does not disable tracking to force activity in Google Analytics. If everything is working properly, you will see your browsing activity in the Real-time section of Google Analytics.
The next section shows how to enable lazy loading for images.
Getting Template Images to Work with Lazy Loading
If you have articles with a large number of screen captures or other images (like this article), you may want to enable lazy loading of images to reduce server load and improve page load times. With lazy loading enabled, an image does not load until the reader scrolls down to that portion of the article. With lazy loading enabled, you may need to exclude some images that are displayed as part of the template, as shown in Figure 13.
The next section talks about optimizing CSS delivery to improve render time.
CSS Delivery
In Google PageSpeed, one of the metrics concerns reducing the amount of data that must be downloaded before the first screen can be rendered. Under the Pro Options in JCH Optimize, you can attempt to do this using the “Optimize CSS Delivery” option. This option can take some significant trial-and-error testing and is probably the option most likely to break after system maintenance.
Using the JBetolo Extension to Compress, Minify, and Merge
This article was originally written to describe setting up server side caching with JBetolo, but I had repeated problems getting it to work properly with MathJax, the Joomla print/email icons (an icon font), and other things, so I switched to the paid JCH Optimize plugin and have had a much easier time getting that plugin to work.
Image Compression with optipng
PageSpeed will point out any PNG images that need to be compressed. For PNG images, the optipng command line utility available from http://optipng.sourceforge.net/ will compress images and make PageSpeed happy. On Ubuntu, you can install it with the command sudo apt-get install optipng, while on Windows you will have to download and install the binary from the SourceForge web site. Figure 14 shows how to call optipng from the command line to compress all of the files in a directory.
Image Compression with jpegtran
PageSpeed will point out any JPEG images that can be compressed. For JPEG images, the jpegtran command line utility available from http://jpegclub.org/ will losslessly optimize the internal structures in JPEG images and reduce the size by 10-20% in many cases without loss of image quality. The command is not quite as simple as the optipng tool, as you will either need to write a script or run the command several times:
jpegtran -copy none -progressive -optimize input.jpg > output.jpg
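A short loop handles a whole directory. Because the shell redirection would truncate the input file before jpegtran reads it, the loop writes to a temporary file and then moves it into place; the in-place overwrite is a design assumption, so work on copies if you want to keep the originals.

```shell
# Optimize every JPEG in the current directory in place
for f in *.jpg; do
    jpegtran -copy none -progressive -optimize "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
```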
Client Side Caching
The final step in setting up caching is to configure the client browser expiration settings for your web site. Google PageSpeed recommends setting the client-side cache expiration to a minimum of one week (604,800 seconds), which works well for most content. The .htaccess generator in Akeeba Admin Tools will do this for you, but if you are not using Akeeba Admin Tools, you will want a section in your .htaccess that looks something like the example shown in Figure 15.
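As an illustration of what such a section can look like (a sketch, not the actual Figure 15; the one-week value follows the PageSpeed recommendation, and the MIME-type list should be adjusted to your site):

```apache
<IfModule mod_expires.c>
  ExpiresActive On
  # One week, per the PageSpeed recommendation
  ExpiresDefault "access plus 1 week"
  ExpiresByType image/png              "access plus 1 week"
  ExpiresByType image/jpeg             "access plus 1 week"
  ExpiresByType text/css               "access plus 1 week"
  ExpiresByType application/javascript "access plus 1 week"
  # Needed for the moonico.woff icon font
  ExpiresByType application/x-font-woff "access plus 1 week"
</IfModule>
```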
The moonico.woff icon font requires two application/x-font-woff entries in the expiration list, and may require others. This font is used for the print, email, edit and search icons and is the most difficult thing to get working reliably with jBetolo. Without these entries, you will have a page that looks fine until the cache expires.
If you are not using a plugin to add the MathJax code, you can convert the synchronous script loading to asynchronous loading by adding the async attribute to the script tag. Making this change got rid of the red “You Should Fix” section of recommendations in the PageSpeed list for this page and improved the PageSpeed score by one point. The asynchronous syntax is shown below:
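A sketch of the asynchronous form, assuming the MathJax CDN loader of the time (use the src URL and config parameter your page already loads):

```html
<script type="text/javascript" async
        src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
</script>
```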
Google Analytics Timing Results
For a 30-day period where no caching was enabled but compression was enabled (a PageSpeed score of 67 for the example page), Google Analytics reported an average load time of 5.12 seconds. Average load times with merging, minification, and compression turned on in JCH Optimize will be added at a future date, but appear to be under 2 seconds.
- Written by Bruce Moore
- Hits: 9980
Wiping a Disk Drive Prior to Recycling
A neighbor recently asked me what to do with some old hard drives before sending them to recycling. The article that follows gives a procedure that will wipe a disk drive in a secure way prior to sending a machine out for repair, donating it, or recycling it. You should read the documentation for each method to determine whether or not it meets your information security needs: no one really knows what the NSA and other state-funded intelligence agencies can do, but these methods should be sufficient for most people.
If you have a lot of disks to wipe and they are SATA drives, you can get a USB-attachable disk drive duplicator so that you don't have to open up a machine each time you need to change drives. You should allow 2-3 hours per 100 GB of space on the drive for a 3-pass method, depending greatly on the speed of the machine that you are using; on a machine less than five years old, this will be I/O bound by the speed of the drive and USB connection, but on older machines it will be CPU bound and can take days on a really old machine. If you are in a hurry to donate a laptop or netbook, it may be fastest to remove the drive from the machine, use a fast desktop to wipe it, and then put it back in the laptop.
The first three sections of this article are aimed at wiping drives that have been removed from a machine, or USB drives. If you are donating a whole machine, you can create a bootable Ubuntu Linux disk and wipe the machine from that disk, as described in the Standalone Ubuntu Bootable DVD section.
Before wiping a drive, make SURE that you have backups of the drives on your computer that you intend to keep, and that you are wiping the drive that you intend to wipe, as you can easily wipe the wrong disk if you are not careful. The article is divided into the following sections:
- Wiping a Disk on Windows
- Wiping a Disk on OS X
- Wiping a Disk on Linux
- Standalone Ubuntu Bootable DVD
I have not used any of the Windows programs; I have a Linux machine and it is much easier to do there. On Windows machines, perhaps the preferred way to wipe a disk is to use one of two utilities provided by Microsoft:
- SDelete available from Microsoft Technet
- Diagnostics and Recovery Toolset. Unfortunately, the toolset isn't really available to individuals or small businesses, as it requires a volume licensing agreement.
On OS X, there are two free alternatives. One is part of the disk utility in recent versions of OS X and the other is available in the MacPorts utilities.
To erase a disk using the Disk Utility, select the drive and select the "Erase" tab as shown in Figure 1, and then select "Security Options", which will bring up the dialog in Figure 2. The default option is not to wipe the drive; as you move the slider to the right, the options increase from a 1-pass to a 3-pass, and finally to a 7-pass wipe of the drive. The screen captures were taken on OS X Yosemite (10.10.1) and will look different in earlier versions of the utility. Earlier versions of Disk Utility did 1, 7 and 35 passes.
MacPorts bcwipe Tool
If you have MacPorts installed, bcwipe will give you a command line utility for securely wiping a disk, including a 35-pass option if your version of OS X lacks one. Perhaps the biggest advantage of bcwipe is that it can be run from a batch script--perhaps in a cron job--to erase unused space on a regular basis.
To find the disk device name to pass to bcwipe, use a command such as diskutil list to get the filesystem name, which will look like /dev/disk1s2; that filesystem resides on the /dev/disk1 device, which is the name that would be given to bcwipe to wipe the entire drive.
On Linux, it is useful to know the commands to unmount a USB drive and detach the USB drive as two steps:
sudo apt-get install udisks
sudo udisks --unmount /dev/sdb1
sudo udisks --detach /dev/sdb
Substitute your drive letter for the “b” in sdb1 and sdb. Before wiping a drive, you will need to unmount it, but not detach it.
There are at least two alternatives on Linux; the two that I have used are described below.
Wipe is one of the earliest tools available on Linux and has perhaps the most useful write-up on how disk drives work and on the security aspects of donating and recycling disk drives. It does not have some of the more recent standards-based wipe protocols. To install and use it, issue the following commands:
sudo apt-get update
sudo apt-get install wipe
wipe -q /dev/sdx
Scrub was written at Lawrence Livermore National Laboratory and implements the scrub policies of many government organizations. If you need to meet a particular standard, this is probably the easiest way to comply. To install it, issue the following commands:
sudo apt-get update
sudo apt-get install scrub
For a basic wipe of the disk that will meet the policy of many government agencies, use the nnsa profile:
scrub -p nnsa /dev/sdx
If you want something more secure, the following will do a 35-pass Gutmann wipe, but recognize that it may take a couple of days:
scrub -p gutmann /dev/sdx
Standalone Ubuntu Bootable DVD
If you are donating a machine and do not want to remove the drives to wipe them, you can create a bootable DVD and run the process from there. Be forewarned that by the time you are ready to donate a machine, it is old and slow--running a wipe on an old machine may take a couple of days. Use the following steps:
- Download an ISO image of Ubuntu from Ubuntu Download site. Choose the 32-bit version, since it will run on everything.
- Burn the ISO to a bootable disk using the disk burner of your choice. Instructions for burning a DVD are available for both Windows and for OS X.
- Boot the machine to be wiped from the DVD. If the machine does not boot from the DVD, you will need to change the boot order in your BIOS. This will require pressing a particular set of keys while the machine is powering on to bring up the BIOS settings utility. Search on your manufacturer and model number to find out what keys to use.
- When you have booted from the DVD, choose the option to run Ubuntu from the DVD rather than install it. Make sure that the machine is connected to the Internet.
- Once you have booted to a desktop, open a terminal window and issue the following commands:
sudo apt-get update
sudo apt-get install scrub
scrub -p nnsa /dev/sda
- If the machine has multiple drives, you can start multiple terminal sessions and run the disk wipes in parallel. To get the disk names, use the command lsblk and select the three-letter device names at the top of the hierarchy. This will take a few hours.
- Written by Bruce Moore
- Hits: 24245
Sales and Lead Management with SuiteCRM
For a small business doing business-to-business (B2B) marketing and sales, it is important to keep track of all contacts with customers and potential customers. There are many single-user contact managers that provide this capability, but as the business grows, it will be important to be able to hand off accounts to different account managers. Splitting account-level information out of a single-user contact manager can be difficult or impossible.
A small business doing B2B marketing needs many of the same Customer Relationship Management (CRM) capabilities as a big business, but at a significantly smaller cost. There are a number of cloud-based solutions such as Salesforce, but they can be expensive. Fortunately, there are some very good open source alternatives that are available for download and installation on your web server, or for subscription. The article that follows describes SuiteCRM, one of the open source alternatives for a CRM system.
SuiteCRM vs. SugarCRM--Open Source Forks from the SugarCRM Code Base
SugarCRM is an open source CRM system that has been around for several years and which is quite mature. For a variety of reasons that can be quickly found with a Google query of "sugarcrm vs. suitecrm," a firm called SalesAgility forked a copy of SugarCRM and released it as SuiteCRM. I first installed the Community Edition of SugarCRM--or more correctly, I attempted to install the Community Edition of SugarCRM. I couldn't get it to install correctly. I then downloaded and installed SuiteCRM and was able to configure SuiteCRM relatively quickly and painlessly.
By going the route of a local installation, I have a lot of flexibility as my business grows and I add or contract out various functions--most likely sales. With SuiteCRM I can take the following routes:
- Set up a virtual private network (VPN) that allows a sales rep to get to my SuiteCRM server on my local network.
- Put SuiteCRM on a public web server where a sales rep can log in.
- Upgrade to the paid support version of either SuiteCRM or SugarCRM.
- Export data from my local system and load it onto SuiteCRM’s subscription cloud service.
- Export data from my local system and load it onto SugarCRM’s subscription cloud service.
- Export data from my local system and load it onto Salesforce or another cloud-based CRM system.
By going with an open source web-based product, I get the functions that I need, and a clear growth path without the start-up costs of a commercial subscription. The next sections describe how to set up SuiteCRM.
Setting Up and Using SuiteCRM
On Linux systems, setting up SuiteCRM is really quite easy; beyond the basic installation, it fits into the following major steps:
- Loading Account List
- Loading Contact List
- Setting up Crontab for Batch Jobs
- Geocode Addresses
You will need to install Apache, MySQL and PHP using apt-get, rpm, or whatever package tool is appropriate for your Linux distribution. Next, download the SuiteCRM zip file from SuiteCRM.com and extract it into the /var/www/suiteCRM directory, or whatever directory your web server requires, and then follow the complete instructions given in the SuiteCRM 7.1 Installation Guide.
SuiteCRM has the capability to send emails internally; if you want to use this feature, make sure to have your email server information available to configure during installation. You can configure this at a later date if you want. You will need:
- Server name
- Port number
- User ID for marketing emails
You will probably also want to have a copy of your company logo as a
.png file to replace the default SuiteCRM logo.
Loading Account List
Once you have installed SuiteCRM, you will want to load a list of accounts or potential accounts. In some industries, this is easily done with a download of license or incorporation data from a government web site. In other cases, you will have to pay for a list. In any case, you will want at least the following information:
- Business name
- Business address, broken out by street, city, zip and country
- Business phone number
- Annual revenue
SuiteCRM will take much, much more. For a complete list, download the Import File Template:
- Log in and go to the “User Name” in the upper right corner and select “Admin”.
- Under the “System” section, choose “Import Wizard”.
- Select “Accounts” and begin the import process.
- Immediately above the file selection control is a link to download the Import File Template.
Use the import template to organize your data into the correct column structure. There is a mapping capability, but it is much easier to get your data into the columns that SuiteCRM expects. Once you have your data in the correct column structure, repeat the import process, but this time select your file and go through the remaining part of the import dialog.
Loading Contact List
Loading the contact list is quite similar to loading the account list. First, you must export from your existing contact manager. If you have an option, choose to export in the CSV format for Microsoft Outlook, as this will require less rework than other export formats. If you have contacts at accounts that were loaded in the previous step, try to standardize the company names for the contacts before you import them, as this will make searching easier.
Set up Crontab for Batch Jobs
To get SuiteCRM to run geocoding, email campaigns and other batch processes, you will need to set it up to use crontab, the batch scheduler on Linux and Unix. On Windows, you may be able to do this with the crontab built into Cygwin. This step is poorly documented as far as I can tell. To set up the crontab for SuiteCRM on Ubuntu, you will need the commands listed below. The path /var/www/suitecrm/ will need to be changed to match the location where you have installed SuiteCRM, as will the user ID if the web server does not run as www-data.
$ sudo su www-data
[sudo] password for userID:
$ php /var/www/suitecrm/cron.php
After you run this command, go to the admin page and look at the schedule; all of the jobs should show one execution immediately after you ran the cron.php program. Next, edit the cron table for www-data with the commands:
$ whereis php
php: /usr/bin/php /usr/bin/X11/php /usr/share/php
$ crontab -e
Add the following line to the crontab to run the scheduler every minute. Modify the path to php as necessary to match one of the paths found in the output of the whereis php command:
* * * * * cd /var/www/suitecrm; php -f cron.php > /dev/null 2>&1
Cron will now run the SuiteCRM cron.php program every minute; cron.php will go through the list of jobs defined in the SuiteCRM schedule and run any jobs that are overdue. The exact line to add to crontab is highlighted in grey at the bottom of the schedule panel, as shown in the figure below.
Schedule Panel Showing Job to Add to Crontab.
Geocode Addresses
SuiteCRM has a built-in capability to call the Google Maps API to geocode all of the addresses in your system. Geocoding addresses makes it easier to plan routes and schedule sales calls in the same geographic area. Since Google limits API calls to 2,500 per day for a given IP address, SuiteCRM will batch the geocoding over several days until all addresses are done. To geocode your addresses:
- Log in and go to the “User Name” in the upper right corner and select “Admin”.
- Under the “Google Maps” section, choose “Google Maps Settings”, and make sure that you geocode the address field that you used for the data import. The Google Maps section of the admin screen is shown in the figure Google Maps Section of the SuiteCRM Admin Panel.
- Under the “Google Maps” section, choose “Geocode Addresses”. The geocoding API will generate a summary of the geocoding status, as shown in the figure Summary of Geocoding Status. To geocode everything, you will need to set up a nightly CRON job. To do this, copy the link location just below the summary in the “CRON URL” section and proceed to the next step in this list.
- On the admin screen, in the “System” section, select “Schedule an Event” and then “Create” in the upper right corner.
- Give the job a name (Nightly Geocode) and paste the URL that you copied from the Geocode Summary into the Job URL field. In the URL, replace the host name portion so that the URL begins with http://localhost/suitecrm/index.php, or whatever the domain name is for your installation.
- Change the Interval to hourly, and save the job. If you are familiar with crontab, select the advanced options to see the exact crontab entry. The settings are shown in the figure Setting Up the Nightly Geocode Job.
- Written by Bruce Moore
- Hits: 12101