Metadata file does not match checksum
Submitted by sandip on Mon, 06/09/2008 - 23:27

If you get the error "Metadata file does not match checksum", try running:

# yum clean metadata

If cleaning the metadata alone does not resolve the issue, `yum clean all` should.
Get a count of files/folder in a directory
Submitted by sandip on Tue, 05/27/2008 - 11:33

$ ls -A1 /path/to/folder | wc -l

Lists the files in a directory, including hidden files, in single-column format and pipes the output through wc for a line count.
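A quick way to try the one-liner in a throwaway directory (the /tmp/count-demo path is made up for illustration):

```shell
# Create a scratch directory with two regular files and one hidden file
mkdir -p /tmp/count-demo
touch /tmp/count-demo/a.txt /tmp/count-demo/b.txt /tmp/count-demo/.htaccess

# Count entries, including hidden ones, but excluding . and ..
ls -A1 /tmp/count-demo | wc -l    # prints 3
```

Note the count will be off for filenames that contain embedded newlines; for those rare cases, `find /tmp/count-demo -mindepth 1 -maxdepth 1 | wc -l` has the same limitation, so this is fine for everyday use.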
Moving files around to include hidden files
Submitted by sandip on Tue, 05/27/2008 - 11:07

Oftentimes when moving files from one directory to another, specifically when dealing with web folders, I have missed the all-important hidden .htaccess file with just the usual `mv source/* destination` command. Here's a one-liner that will include the hidden files too:

$ ls -A <source> | while read i; do mv <source>/"$i" <destination>; done

IP range to CIDR conversion
Submitted by sandip on Thu, 05/15/2008 - 10:19

I've often had to convert an IP range with netmask to CIDR notation. Below is a quick perl script to help with the conversion:

#!/usr/bin/perl -w

Track files uploaded via pure-ftpd
Submitted by sandip on Mon, 05/12/2008 - 09:28

Recently, I've had more than one occurrence of files being messed up due to bad uploads from users on a cPanel server running pure-ftpd. Here is a simple one-liner to get a report of uploads:

/bin/grep pure-ftpd /var/log/messages | grep upload | grep -v <trusted ip address>

"trusted ip address" would possibly be your own. I put the above on a daily cron and keep an eye out for user uploads. »
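The hidden-file move above can be tried safely in scratch directories first (the /tmp/move-demo paths are made up for illustration; `read -r` is added so backslashes in unusual filenames are not mangled):

```shell
# Scratch source and destination directories
src=/tmp/move-demo/src
dst=/tmp/move-demo/dst
mkdir -p "$src" "$dst"
touch "$src/index.html" "$src/.htaccess"

# Move everything, hidden files included
ls -A "$src" | while read -r i; do mv "$src/$i" "$dst/"; done

ls -A "$dst"    # .htaccess and index.html both arrived
```

In bash specifically, `shopt -s dotglob; mv "$src"/* "$dst"/` achieves the same without parsing ls output at all.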
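The body of the perl script for the CIDR conversion did not survive in this copy of the post. As a stand-in, here is a minimal shell sketch of the same netmask-to-prefix-length conversion (the function name mask2cidr is my own, not from the original script):

```shell
# Convert a dotted-quad netmask (e.g. 255.255.255.0) to a CIDR prefix
# length by summing the set bits contributed by each octet.
mask2cidr() {
    local bits=0 octet
    local IFS=.
    for octet in $1; do
        case $octet in
            255) bits=$((bits + 8)) ;;
            254) bits=$((bits + 7)) ;;
            252) bits=$((bits + 6)) ;;
            248) bits=$((bits + 5)) ;;
            240) bits=$((bits + 4)) ;;
            224) bits=$((bits + 3)) ;;
            192) bits=$((bits + 2)) ;;
            128) bits=$((bits + 1)) ;;
            0)   ;;
            *)   echo "invalid netmask octet: $octet" >&2; return 1 ;;
        esac
    done
    echo "$bits"
}

mask2cidr 255.255.255.0    # prints 24
mask2cidr 255.255.240.0    # prints 20
```

So 10.1.0.0 with netmask 255.255.240.0 would be written as 10.1.0.0/20.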
apache internal dummy connection
Submitted by sandip on Sat, 05/10/2008 - 14:54

I've noticed these in the httpd access log starting with Apache 2.2:

::1 - - [09/May/2008:14:53:29 -0400] "GET / HTTP/1.0" 200 5043 "-" "Apache (internal dummy connection)"

The apache server occasionally hits localhost to signal its children. See the apache wiki for more info.

Unfortunately, the homepage I host is a dynamic one and this becomes very costly during busy times. I see a large number of these internal dummy connection requests during an apache graceful restart (SIGUSR1), and at the same time the cpu load on the Apache 2.2 server maxes out at nearly 100%. I do not see this cpu load during a graceful restart on apache 2.0 httpd servers.

With the below mod_rewrite rule in place I was able to reduce the load by pointing requests whose HTTP_USER_AGENT contains "internal dummy connection" to an empty static html page:

RewriteEngine on

Also, removed logging of such requests via:

SetEnvIf Remote_Addr "::1" dontlog »
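The rule body after `RewriteEngine on` was lost in this copy of the post. One plausible form, not necessarily the author's original (the /blank.html target and the CustomLog path are illustrative assumptions):

```apache
RewriteEngine on
# Match the user agent Apache 2.2 sends on its self-signalling requests
RewriteCond %{HTTP_USER_AGENT} "internal dummy connection" [NC]
RewriteRule .* /blank.html [L]

# The dontlog flag only suppresses entries if the access log honors it:
SetEnvIf Remote_Addr "::1" dontlog
CustomLog /var/log/httpd/access_log combined env=!dontlog
```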
apcupsd rpm rebuild on CentOS-5
Submitted by sandip on Sat, 05/10/2008 - 13:42

Apcupsd is a daemon for controlling APC UPSes. It can be used for power management and controlling most of APC's UPS models on Unix and Windows machines. Apcupsd works with most of APC's Smart-UPS models as well as most simple signalling models such as Back-UPS and BackUPS-Office. During a power failure, apcupsd will inform the users about the power failure and that a shutdown may occur. If power is not restored, a system shutdown will follow when the battery is exhausted, a timeout (seconds) expires, or runtime expires based on internal APC calculations determined by power consumption rates.

I kept getting failures when rebuilding from the source rpm and was able to resolve them once the package latex2html was installed, although the build had not reported any dependency failure. The required packages I had to install were: gd-devel, tetex, tetex-latex, glibc-devel, ghostscript, latex2html. »
Check service linked to libwrap / tcpwrapper
Submitted by sandip on Wed, 05/07/2008 - 11:10

In order to use hosts_access (hosts.allow/hosts.deny), a service needs to be compiled with tcpwrapper (tcpd) support, which can be checked easily with the commands below. hosts_access is great as an alternative to iptables and a firewall, specifically if you are hosted on a VPS with limited resources for iptables rules.

# ldd `which sshd` | grep -i libwrap

or

# strings `which sshd` | grep -i libwrap

Both commands should echo out libwrap.so.0, which means hosts_access can be used for the sshd service.

To make sure you remain able to connect over ssh, add your IP to "/etc/hosts.allow". In the below case I am using the full range of my local intranet (LAN).

# Allow localhost

Now to block ssh access for everyone else, simply add the below lines to "/etc/hosts.deny".

# Block everyone else from SSH

Note: hosts.allow takes precedence over hosts.deny. »
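The actual entries after those comments did not survive in this copy. A minimal sketch of what the two files usually contain (the 192.168.1. LAN range is an assumption; substitute your own network):

```
# /etc/hosts.allow
# Allow localhost and the local LAN
sshd : 127.0.0.1 192.168.1.

# /etc/hosts.deny
# Block everyone else from SSH
sshd : ALL
```

A trailing dot on "192.168.1." matches the whole 192.168.1.0/24 range in hosts_access syntax.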
Issues with receiving mail on Plesk server
Submitted by sandip on Tue, 04/22/2008 - 09:05

I was not receiving mails from a particular email address. The MX records checked out fine. The mail server was not on any of the DNSBL lists I was subscribed to. There was nothing in the logs mentioning any emails coming in from the user. However, the logs did show a lot of relaylocks for the mail server's IP address.

Digging in some more, I found a similar issue discussed at theplanet forum, where the problem was caused by a conflict of timeouts: auth packets were being dropped by the sending mail server. So I adjusted the qmail timeout, which pushed the conversation between the MTAs forward, and the emails are now being accepted. I changed the default timeout from 30 seconds to 15 seconds by editing the /etc/inetd and adding -t15 as below.

smtp stream tcp nowait.1000 root /var/qmail/bin/tcp-env tcp-env -t15 /usr/sbin/rblsmtpd -r bl.spamcop.net -r zen.spamhaus.org /var/qmail/bin/relaylock /var/qmail/bin/qmail-smtpd /var/qmail/bin/smtp_auth /var/qmail/bin/true /var/qmail/bin/cmd5checkpw /var/qmail/bin/true

Incremental snapshot backups via rsync and ssh
Submitted by sandip on Fri, 04/04/2008 - 19:35

In follow-up to the previous post, I am compiling this as a separate post, as this solution has been running very stably for a while with quite a few updates and changes. I will be setting up a backup of a remote web host via rsync over ssh and creating snapshot-style backups on the local machine. The backups are incremental: only the files that have changed are backed up, so very little bandwidth is used during the backup and it does not cause any load on the server.

These are sliced backups, meaning that you get a full backup for each of the last 4 days and each of the last 4 weeks, so data can be restored for up to a month back. Below is an example listing of backups you would see.

Mar 11 - daily.0

Each of those is a full snapshot for the particular day/week. The files are all hard-linked and would only require 2 to 3 times the space used on the server. The backups should consist of web, database, email and some of the important server configuration files. »
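A minimal sketch of the hard-link rotation behind such snapshots (the /tmp/snap-demo path and the remote host in the comment are made up; the rotation itself only needs cp -al, which duplicates a tree as hard links at near-zero extra space):

```shell
#!/bin/sh
# daily.3 is dropped, older snapshots shift down, and daily.0 is
# seeded as a hard-linked copy of itself so the next sync only
# rewrites files that actually changed.
BACKUP=/tmp/snap-demo
mkdir -p "$BACKUP/daily.0"

rotate() {
    rm -rf "$BACKUP/daily.3"
    [ -d "$BACKUP/daily.2" ] && mv "$BACKUP/daily.2" "$BACKUP/daily.3"
    [ -d "$BACKUP/daily.1" ] && mv "$BACKUP/daily.1" "$BACKUP/daily.2"
    # Hard-linked duplicate: unchanged files share disk blocks
    [ -d "$BACKUP/daily.0" ] && cp -al "$BACKUP/daily.0" "$BACKUP/daily.1"
}

rotate
# The actual pull would then refresh daily.0 in place, e.g.:
# rsync -a --delete -e ssh user@remote.example.com:/var/www/ "$BACKUP/daily.0/"
```

Because unchanged files in daily.0 and daily.1 are the same inode, four daily plus four weekly snapshots typically cost only 2 to 3 times the live data size, matching the post's estimate.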