  1. Nice upgrade! I have two active domains hosted by TCH but only one shows up in the billing portal. Is there an easy way to add the other?
  2. Thanks for your help. I checked my scripts against the Secunia database and found no matches. I rechecked all my cgi scripts as well as all my folders and sub folders, and they're all 755. I run NOD32 Anti Virus software (real-time protection) and also regularly scan with Spybot (also real-time protection) and Ad-Aware, so I'm reasonably certain that my computer is secure.
  3. Thanks for your response. I appreciate your help. This raises an interesting question: how would the average Web site owner "check" his or her scripts? Do the TCH techs review scripts for safety if asked to do so? Is there some other way to ensure that scripts are safe? I use a number of very small Perl scripts on my site. I have only a minimal knowledge of Perl, but enough to modify the scripts so that they do what I want them to do. I can't see anything in them that looks suspicious to *my* eyes, and most have been in continuous use by me for years with no issues. The only permissions I have modified on my site are for the cgi scripts, and they are all set to CHMOD 755, which I believe is rwxr-xr-x. That's what all the instructions say to do. Is rwxr-xr-x safe? Based on this definition, I would say that I'm probably not using any unsecured applications. I'm not running a forum or other third-party applications. I use a simple shopping cart, but it's JavaScript-based, and I've used it for many years without any issues. All this still leaves me scratching my head. I wonder if I'll ever know how those rogue files came to be on my Web site, and who put them there?
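For anyone who wants to double-check this from a shell prompt: mode 755 does indeed correspond to rwxr-xr-x (owner read/write/execute; group and world read/execute only, with no write access for anyone but the owner). A minimal sketch, assuming a Linux shell with GNU coreutils; the filename is just an illustration:

```shell
#!/bin/sh
# Create a sample CGI script and apply the standard 755 mode.
# (The filename "guestbook.cgi" is only a placeholder for this demo.)
touch guestbook.cgi
chmod 755 guestbook.cgi

# ls -l renders 755 as rwxr-xr-x: owner rwx, group r-x, world r-x.
ls -l guestbook.cgi

# stat prints the octal mode directly (GNU coreutils syntax).
stat -c '%a %n' guestbook.cgi
```

The key safety point is the absence of group/world write bits: 755 lets visitors execute the script but never modify it, which is why the tutorials recommend it over 777.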
  4. I discovered that my site was recently hacked. I found bad files that I had never uploaded, some that required help from TCH to remove. All is back to normal now (I hope). The TCH tech who helped me provided these tips to prevent hacking: (1) I understand what complex passwords are and am now using them. (2) I will be changing my passwords more frequently from now on. (3) I don't know what is meant by "unsecured scripts." (4) I don't know what is meant by "full permissions to files/folders." (5) I keep my site fully backed up. (6) I don't know what is meant by "unsecured applications." Could someone help me to understand points 3, 4, and 6, please? Thanks for whatever assistance you can offer!
  5. Thanks for the tip. Before I password protected the single file in my directory (see my reply to Bruce above), I tried adding the trailing slash and found that it did indeed eliminate the multiple prompts. Knowing this may be useful in the future. Thanks again.
  6. I was simply trying to protect the single file within that directory. I don't know why it didn't occur to me to try protecting THAT file rather than the whole directory. I've now done that and everything seems to be working well. Thanks for your input.
  7. I'm trying to protect a directory with a password through cPanel using .htaccess. I followed the tutorial provided in cPanel and thought that I had done everything correctly. However, I now find that I'm routinely receiving multiple prompts for username and password. In other words, when I try to access the directory, I'm prompted to enter my username and password. I do so and hit enter, only to receive another such prompt. Sometimes I need to repeat this procedure three times before finally gaining access to the directory. Does anyone know what I'm doing wrong?
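For reference, a cPanel-style password-protected directory relies on an .htaccess block along these lines; the realm name and AuthUserFile path below are placeholders, not the values cPanel generates for any particular account:

```apache
# Minimal HTTP Basic authentication block (names and paths are placeholders)
AuthType Basic
AuthName "Members Only"
AuthUserFile /home2/username/.htpasswds/protected/passwd
Require valid-user
```

One common cause of repeated prompts is requesting the directory without its trailing slash: the server redirects /protected to /protected/, and some browsers re-issue the credential challenge after the redirect, so linking to the slash-terminated URL avoids the extra prompt.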
  8. Thanks for the advice. I don't know Perl or PHP well enough to write such a script from scratch. I've just spent more than an hour searching the Internet via Google as well as looking at sites like Hotscripts but can't seem to locate anything suitable. I've found scripts like dbsender and backup2mail, but they're designed to work with MySQL databases. I'm dealing with a single file (which would probably require a much simpler script). Does anyone know of a script that will work for me?
  9. I'm running a cron job that backs up a single directory every night. It looks like this:

      tar -czvf /home2/******/backup.tar.gz /home2/******/data/board/users

      Each time it runs, it sends an e-mail that lists the files in the "users" directory that have been compressed into the tar file. Is there a way to change the cron job so that the actual backup.tar.gz file is e-mailed to me each night instead of the list?
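Cron mails whatever the job prints, and tar's -v flag is what produces that file listing. One way to receive the archive itself is to drop -v (so the job runs quietly) and pipe a short message through a MIME-capable mailer such as mutt. A sketch, assuming mutt is installed on the server; the paths and address are placeholders standing in for the masked ones above:

```shell
#!/bin/sh
# Nightly backup: archive one directory, then e-mail the archive itself.
# Paths and the recipient address are placeholders; assumes mutt is installed.
BACKUP=/home2/username/backup.tar.gz

# -czf without -v keeps tar silent, so cron no longer mails the file listing.
tar -czf "$BACKUP" /home2/username/data/board/users

# mutt's -a flag attaches a file; "--" separates attachments from recipients.
echo "Nightly backup attached." | mutt -s "Nightly backup" -a "$BACKUP" -- you@example.com
```

If mutt isn't available, an older fallback is `uuencode backup.tar.gz backup.tar.gz | mail -s "Nightly backup" you@example.com`, though the attachment then arrives uuencoded rather than as a MIME part.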
  10. I believe a program called Empty Temp Folders will do that for you. It can be set to delete zero byte files. http://www.danish-shareware.dk/soft/emptemp/index.html
  11. I submitted the ticket, the modules were installed swiftly, and the script now works beautifully! Thanks again for all your help!
  12. I appreciate the help. However, the modified script generates a "Premature end of script headers" error.
  13. I have a Perl script that records member logins on my Web site. Here's part of the code:

      open(FILE, ">>$log_file");
      {
          ($sec,$min,$hour,$day,$month) = localtime();
          $month++;
          if ($hour < 10) { $hour = "0$hour"; }
          if ($min < 10) { $min = "0$min"; }
      }
      if ($password ne "xxxx") {
          print FILE "$month/$day $hour:$min $password $ENV{'REMOTE_ADDR'}\n";
          close(FILE);

      The problem is that the time printed to the log file for each login is one hour ahead of the time where I live. Presumably, localtime() on the server where my Web site resides (Server 24) returns Eastern Time, whereas I live in the Central Time zone. What changes could I make to the script so that the time it prints to the log file matches my own time zone?
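Perl's localtime() defers to the C library, which honors the TZ environment variable, so one low-effort fix is to set $ENV{TZ} = 'America/Chicago'; near the top of the script, before the first localtime() call (following it with POSIX::tzset() makes the change take effect reliably). The same variable works from the shell, which makes the one-hour Eastern/Central offset easy to see; a sketch assuming a Linux system with standard tzdata zone names:

```shell
#!/bin/sh
# localtime-style output follows the TZ environment variable, so forcing
# TZ to the Central zone shifts the reported hour without touching the
# server clock. Zone names are standard tzdata identifiers.
TZ=America/New_York date '+Eastern: %H:%M'
TZ=America/Chicago date '+Central: %H:%M'
```

Because both zones observe daylight saving on the same dates, Eastern is always exactly one hour ahead of Central, so a log written under TZ=America/Chicago will match Central wall-clock time year-round.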
  14. Thanks to all who replied to this topic, and especially to TCH-Bruce. Just "talking" it through has given me clarity concerning how to proceed. I began by stating that "I'm a very happy TCH customer overall," and I remain so. The helpfulness demonstrated here is just one of many reasons why!
  15. Thanks for stickin' with me on this, Bruce. I need just a little bit more help in order to get my head wrapped around what you're telling me... I've already asked my subscribers to whitelist "my_business@my_domain.com". If I send them mail that originates from "my_home@my_isp.com" but have "my_business@my_domain.com" in the "From:" and "Reply-to:" fields, are you saying that the message should make it past most spam filters? No offense, but aren't spam filters supposed to be "smarter" than that? Also, isn't it possible that some ISPs may have rules in place forbidding their users from doing this sort of thing (that is, from faking addresses in outgoing e-mail)?