Category Archives: Web Stuff

Firefox 3.6.6 Now Available for Download

Today, Mozilla released an update to Firefox’s crash protection feature that extends the amount of time Firefox will wait before terminating unresponsive plugins.

The crash protection feature, first introduced in Firefox 3.6.4, protects Windows and Linux users from crashes and freezes caused by third-party plugins such as Flash and Silverlight. When a plugin crashes, users can reload the Web page to restart the plugin and continue browsing. When a plugin freezes, making the whole browser unresponsive, Firefox 3.6.4 terminates the unresponsive plugin after waiting 10 seconds. These changes were tested with a beta audience of close to one million users.

Following the release of Firefox 3.6.4 we heard from some users, mainly those using older computers, that they sometimes expect longer periods of non-responsiveness from plugins, especially with games. For these users the default timeout of 10 seconds was too short. To address this, we increased the amount of time Firefox waits for a plugin to respond before terminating it from 10 to 45 seconds. This change has been made in Firefox 3.6.6, which was released today as an automatic update for all users.
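For users who still find the default too short or too long for their machines, the timeout is exposed as a hidden preference that can be changed through about:config. To the best of my knowledge the preference is named dom.ipc.plugins.timeoutSecs:

  dom.ipc.plugins.timeoutSecs = 45

Raising the value makes Firefox wait longer before terminating a hung plugin; reportedly, setting it to -1 disables the termination behaviour altogether.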

Firefox 3.6.4 with Crash Protection Now Available

Today, Mozilla is happy to release Firefox 3.6.4, the latest security and stability release for Firefox, used by nearly 400 million people around the world to browse the Web. This release provides crash protection for Windows and Linux users by isolating third-party plugins when they crash.

Results from our beta testing show Firefox 3.6.4 will significantly reduce the number of Firefox crashes experienced by users who are watching online videos or playing games. When a plugin crashes or freezes while using Firefox, users can enjoy uninterrupted browsing by simply refreshing the page.

Mozilla recognizes that third-party plugins provide important functionality in many of today’s websites. At the same time, plugins can lead to problems for users as they browse. With the ability to automatically alert users when they have out-of-date plugins, and now crash protection, Firefox 3.6.4 allows users to experience all the content they love without any of the hassles. (If you’re not running Firefox, Mozilla recommends that you make a habit of visiting the Plugin Check page to keep your plugins up to date.)

At this time Firefox offers crash protection for Adobe Flash, Apple QuickTime and Microsoft Silverlight on Windows and Linux computers. Support for other plugins and operating systems will become available in a future Firefox release.

All Firefox users are encouraged to upgrade for free by using the “Check for Updates” function in the Help menu, or by visiting www.firefox.com.

Firefox to implement calc() in CSS3

I read last week that Firefox is going to implement calc() in CSS. It isn’t available in any publicly released version yet, but it is coming; Mozilla announced it on their developer blog.

Why should you care?

Ever had an element that needed a percentage width along with padding? Until now, because of the way the box model works in modern browsers, you had to wrap the content in a container and apply the padding to that inner element. That’s because when you define padding on an element, the padding is added to the width, which is how the specification intends it to work.

The problem is that percentage widths don’t mix well with fixed padding values, and that’s where calc() comes into play.

Let’s say you have a box with the following CSS rules:

#box {
  padding: 20px;
  width: 60%;
}

If that box is in a container 1000 pixels wide, the box would be 640 pixels wide in total: 600 pixels (60%) plus 20 pixels of padding on each side. That’s not what you want – that’s not 60% wide! You could use calc() to achieve the desired result:

#box {
  width: -moz-calc(60% - 40px);
}

You can even go wild and do basic arithmetic in there:

#box {
  width: -moz-calc(60% - 2 * 20px);
}
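Putting it together, here is a minimal sketch of the complete rule with the padding included; assuming the same 1000-pixel container from above, the arithmetic now works out to exactly 60%:

#box {
  padding: 20px;
  width: -moz-calc(60% - 2 * 20px); /* 600px - 40px = 560px of content, plus 40px of padding = 600px total */
}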

Taking it further

What would be even cooler is if you could reference the element’s actual padding values as variables inside calc(). Something like:

#box {
  width: -moz-calc(60% - (padding-left + padding-right));
}

This way you wouldn’t even have to update your width value when you play with your padding.

Oh well, maybe one day. For now, let’s see if other browser vendors implement something similar. Let’s not forget this is not yet a finalized part of the W3C CSS3 specs.

Is it a feature you were waiting for?

Using Anchor Text Efficiently

One of the things Newbies most underuse when it comes to linking is “anchor text” – the visible text of a link.

The prime mistake Newbies make is to put their website name into the anchor text. Unless your website name contains your keywords, this is a waste of a perfectly good link. Remember that Google places great importance on anchor text, and it should always use your keywords.

The second mistake is trying to cram every single keyword into your anchor text and giving that same text to everyone. There are two problems with this technique:
1.) Google assigns weight to each word in anchor text, so if there are a lot of filler words (common in long sentences), they will “dilute” your target words.

2.) You should vary your text throughout your links – change it every 20 links or so. This just makes sense: if your links were placed naturally, there would never be 300 links all with the EXACT same anchor text.

So with all that in mind here is an example:

Say you sell Blue Widgets in England and want to rank first for “Cheap Blue Widgets in England”.
Your anchor text could be varied between the following (see the markup sketch after the list):
Cheap Widgets
Blue Widgets
Widgets in England
Cheap Blue Widgets
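In HTML terms, the variation is simply different link text pointing at the same page. A minimal sketch, using a made-up placeholder URL:

<a href="http://www.example.com/">Cheap Widgets</a>
<a href="http://www.example.com/">Blue Widgets</a>
<a href="http://www.example.com/">Widgets in England</a>
<a href="http://www.example.com/">Cheap Blue Widgets</a>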

Hopefully this helps you start an effective campaign.

Why Google Indexing Requires A Complex Blend Of Skills

If it was easy, everybody would be doing it. Getting a company’s name and products or services onto the first page of a genuine Google search isn’t a trivial piece of work. In fact, there are four distinct skills a search engine optimizer needs to possess. Most people possess one or maybe two of these skills; very rarely do people possess all four. In truth, to get to all four, people who are good at two of them need to actively develop the other skills. Now, if you are running your own business, do you really have the time to do this? Is this the best use of your time?

Specifically the four skills needed for SEO work are:
Web Design – producing a visually attractive page
HTML coding – developing Search Engine friendly coding that sits behind the web design
Copy writing – producing the actual readable text on the page
Marketing – knowing what searches are actually being used, and which keywords actually bring more business to your company

Many website designers produce more and more eye-catching designs with animations and clever rollover buttons, hoping to entice people onto their sites. This is the first big mistake: using designs like these will actually decrease your chances of a high Google rating. Yes, that’s right; all that money you have paid for the website design could be wasted because no-one will ever find your site.

The reason for this is that before you get people to your site you need to get the spider bots to like your site. Spider bots are pieces of software used by the search engine companies to trawl the Internet looking at all the websites, and then having reviewed the sites, they use complex algorithms to rank the sites. Some of the complex techniques used by web designers cannot be trawled by spider bots. They come to your site, look at the HTML code and exit stage right, without even bothering to rank your site. So, you will not be found on any meaningful search.

I am amazed how many times I look at websites and I immediately know they are a waste of money. The trouble is that both the web designers and the company that paid the money really do not want to know this. In fact, I have stopped playing the messenger of bad news (too many shootings!); I now work round the problem. So, optimizing a website to be Google friendly is often a compromise between a visually attractive site and an easy to find site.

The second skill is optimizing the actual HTML code to be spider-bot friendly. I treat this as separate from web design because you really do need to be “down and dirty” in the code, rather than using an editor like Dreamweaver, which is fine for website design. This skill takes lots of time and experience to develop, and just when you think you have cracked it, the search engine companies change the algorithms used to calculate how high your site will appear in the search results.

This is no place for even the most enthusiastic amateur. Results need to be constantly monitored, pieces of code added or removed, and a check kept on what the competition is doing. Many people who design their own website feel it will be found because it looks good, and totally miss this step. Without a strong technical understanding of how spider bots work, you will always struggle to get your company onto the first results page in Google.

Thirdly, I suggested that copy writing is a skill in its own right. This is the writing of the actual text that people coming to your site will read. The Google bot and other spider bots like Inktomi love text – but only when it is written well, in proper English. Some people try to stuff their site with keywords, while others put white writing on white space (so spider bots can see it but humans cannot).

Spider bots are very sophisticated: not only will they not fall for these tricks, they may actively penalize your site – in Google terms, this is sandboxing. Google takes new sites and “naughty” sites and effectively sin-bins them for 3-6 months; you can still be found, but not until results page 14 – really useful! As well as good English, the spider bots also read the HTML code, so the copy writer needs an appreciation of the interplay between the two. My recommendation for anyone copy writing their own site is to write normal, well-constructed English sentences that can be read by machine and human alike.

The final skill is marketing, for after all, that is what we are doing – marketing your site, and hence your company and products/services, on the Web. The key here is to set the site up to be accessible to the searches that will provide the most business to you. I have seen many sites that can be found by keying in the company name. Others can be found by keying in “Accountant Manchester North-West England”, which is great, except no-one ever actually does that search. So the marketing skill requires knowledge of a company’s business, what they are really trying to sell, and an understanding of which actual searches may pay dividends.

I hope you will see that professional Search Engine Optimization companies need more than a bit of web design to improve your business. Make sure anyone you choose for SEO work can cover all the bases.

XHTML – Kicking And Screaming Into The Future

XHTML, the standard, was first released back in 2000. Roughly five years later we begin to see major websites revised to use this standard. Even the favorite whipping boy of standards-compliance punditry, Microsoft, presents their primary homepages, msn.com and microsoft.com, in XHTML. Standards-compliant XHTML sites are still the minority. The reason is simple. When the W3C released the new standard, the rest of the web running on HTML did not cease to function. Nor will the rest of the web, written in various flavors of HTML, cease to function any time soon. Without any pressing need to conform to the new standard, designers continue to use old, familiar methods. These methods will perform in any modern browser, so why bother switching?

These sentiments are similar to ones I experienced; a kind of “if it’s not broke, don’t fix it” mentality sets in. Whether HTML was “broken” or not is a different argument. The casual Internet user’s standards are fairly simple: if a site displays without noticeable error and functions to their satisfaction, those standards are met. Whatever additional steps the browser took to make such display possible are irrelevant to most users. This kind of mentality is difficult to overcome in designers accustomed to their old methods.

Technical obstacles to adopting XHTML may be quite steep as well, especially for large existing websites with complex scripting. Yet the time may eventually come when yesterday’s “tried and true” HTML is little more than an ancient language, unable to be interpreted by modern electronic devices. Whether one agrees with the direction the W3C takes in the development of HTML is irrelevant; you are just along for the ride. With some perseverance, getting the hang of XHTML is possible. In form, it is not as different from HTML as Japanese is from English. Knowing HTML grants a basic knowledge of the language; it simply becomes a matter of learning a particular dialect. Even an original nay-sayer such as myself managed to do it.

Benefits of XHTML
There are two primary benefits to using XHTML. First is the strict nature of valid XHTML documents. “Valid” documents contain no errors, and documents with no errors can be parsed more easily by a browser. Though the time saved is, admittedly, negligible from the human user’s point of view, there is a greater efficiency to the browser’s performance. Most modern browsers will function well in what’s usually referred to as “quirks” mode, where, in the absence of any on-page information about the kind of HTML they are reading, they present a “best guess” rendering of a page. Quirks mode will also forgive many errors in the HTML. Modern browsers installed on your home computer have the luxury of size and power to deal with these errors. When browser technology makes the leap to other appliances, it may not have the size and power to be so forgiving. This is where the strict, valid documents demanded by the XHTML standard become important.

The second benefit is in the code itself, which is cleaner and more compact than the common “table” based layout in HTML. Though XHTML retains table functionality, the standard makes clear that tables are not to be used for page layout or anything other than displaying data in tabular format. This is generally the primary obstacle designers have with moving to XHTML: the layout method many have come to rely on to organize their pages is now taboo. Simple visual inspection of XHTML code reveals how light and efficient it is in comparison to a table-based HTML layout. XHTML makes use of Cascading Style Sheets (CSS), which, when called externally, remove virtually all styling information from the XHTML document itself. This creates a document focused solely on content.

XHTML makes use of “div” tags to define content areas. How these “divisions” are displayed is controlled by CSS. This is known as CSS-P, or CSS Positioning. Trading in “table” tags for “divs” can be tough; learning a new way of accomplishing an already familiar task is generally difficult. Like learning to use a different design program or image editor, frustration can be constant. Looking at “divs” as a kind of table cell might be helpful, though they are not entirely equivalent. As required by the XHTML standard, always make sure there is a DOCTYPE definition at the top of the document. This is not only required by the standard, it will also force Internet Explorer 6, currently the most common browser, to enter its “standards compliance” mode. IE6 and Firefox, both operating in standards compliance mode, will display XHTML in much the same way – not identically, but far better than IE6 operating in quirks mode. Ironing out the final differences between displays is the last obstacle, and can require a bit of tweaking in the CSS.
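To make that concrete, here is a minimal sketch of a div-based XHTML 1.0 Strict page with the positioning handled in CSS; the id names and widths are made up for illustration:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
  <title>CSS-P layout sketch</title>
  <style type="text/css">
    /* the divs do the job a layout table used to do */
    #nav     { float: left; width: 20%; }
    #content { margin-left: 22%; }
  </style>
</head>
<body>
  <div id="nav">Navigation links here</div>
  <div id="content">Page content here</div>
</body>
</html>

In practice you would move the style block into an external sheet, as noted above, leaving the document focused solely on content.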

Clean code has multiple benefits. It creates a smaller page size which, over time, can save costs associated with transfer usage. Though the size difference may appear small, for someone running a highly trafficked site, even saving a few kilobytes can make a big difference. Further, some believe search engines may look more kindly on standards-compliant pages, though this is only a theory. In a general sense, any page modification that makes the content easier to reach and higher in the code is considered wise. Search engines, so it is believed, prefer to reach content quickly, and give greater weight to the first content they encounter. Using XHTML and “div” layout allows designers to accomplish this more easily.

Conclusions
XHTML is the current standard set by the W3C. The W3C continues development of XHTML, and XHTML 2.0 is slated to replace the current standard in the future. Learning and using XHTML today will help designers prepare for tomorrow. Valid XHTML produces no errors that might slow down a browser, and the code produced is clean and efficient. This saves in file size and helps designers better accomplish their search engine optimization goals. Learning XHTML is primarily about learning a new way to lay out pages. Though frustrating at first, the long-term benefits far outweigh any initial inconvenience.

A Review on “Power Shortcuts for Adobe Photoshop CS”

If you want to produce outstanding, award-winning movies quickly, you can count on Adobe Photoshop CS and its integrated web production software. Photographers, graphic artists, web designers, and video professionals can take advantage of indispensable features such as new design possibilities, improved file management, a more intuitive way to create for the web, and support for 16-bit images, digital camera raw data and non-square pixels. With it, you can truly create the highest quality results more efficiently than ever before. Truly, Adobe Photoshop CS is destined to become an important tool.

According to the article “Power Shortcuts for Adobe Photoshop CS” by Michael Ninness, posted at http://movielibrary.lynda.com, the title is a series of movie-based tutorials designed to help Photoshop users become faster and more productive. Although the tutorials are grouped by topic, each movie is packed with time-saving tips and shortcuts and can be viewed independently. The topics included are palette shortcuts, screen mode shortcuts, navigation shortcuts, selection shortcuts, type shortcuts, dialog box shortcuts, file browser shortcuts, view shortcuts, layer shortcuts, and image adjustment shortcuts.

Power Shortcuts for Adobe Photoshop CS brings many new things with it. I absolutely don’t agree with those who say it is difficult to learn Photoshop CS, because in fact it isn’t! Tutorials such as Power Shortcuts for Adobe Photoshop CS can help you in the learning process. For those of you who are not afraid of experimentation, especially in movie making, this is the perfect time to enjoy this great field and become movie-making savvy. All you need, especially if you’re learning on your own, are the desire and passion to improve your digital images, the flair to experiment, and excellent tutorials to guide you along the way.

You’ll truly enjoy using Power Shortcuts for Adobe Photoshop CS because there are endless possibilities to create anything you desire. Of course, you have to be familiar with, and fairly advanced in, the Photoshop CS program – that is a big caveat. And I can assure you of one thing: Power Shortcuts for Adobe Photoshop CS is worth every penny you’ll spend on it. I must say it’s one tutorial that’s worth buying.

Truly, it’s hard to believe that Adobe keeps producing this kind of material to help its avid users complete their art projects. Creating films is now much simpler with Power Shortcuts for Adobe Photoshop CS. Everything about these tutorials is a plus. Definitely! With it, you can do loads of other things, and you’ll surely love Power Shortcuts for Adobe Photoshop CS. Congratulations and more power to Michael Ninness!

Top 10 Search Engine Positioning Mistakes!

Search Engine Positioning is the art of optimizing your web site so that it gets into a high position on the search engine results page whenever someone searches for keywords that relate to your products and services.

However, some people make basic mistakes while designing their web sites and, as a result, never make it to the top – even if they work hard at it. Or they may waste a lot of money on useless tools and services.

Do you make these mistakes too?

1. Designing a Frames-based web site
This one is the biggest loser of them all. Frames may make the job of maintaining a very big and complicated web site easier, but search engines absolutely hate them. Most search engines cannot easily find their way through frames and end up indexing only the home page.

Now imagine this: one of your internal pages has been returned by a search engine and the user has clicked on it. What a mess! The page looks orphaned without the outer frame and the navigation.

Lose your frames right away. You will start seeing improvements the moment you redesign your site without them. A sketch of the markup at issue follows.
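For anyone who has not run into one, here is a minimal sketch of a frames-based home page (the file names are made-up placeholders). The frameset itself is all a search engine sees at this URL; the noframes fallback, with real text and links, is the only content left for it to index:

<frameset cols="200,*">
  <frame src="nav.html" />
  <frame src="main.html" />
  <noframes>
    <body>
      <p>Fallback text and links for browsers and bots that do not render frames.</p>
    </body>
  </noframes>
</frameset>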

2. Having an all-Flash or graphic-only home page

This is another classic mistake. Many designers build web site home pages like brochures: a beautiful cover which has to be opened to be read. But on the Internet, every click costs you some prospects. Did they click your ENTER button or the Back button?

You see, search engines need content to index. If you don’t have content on the home page, only a Flash movie or a big animated graphic, how will the search engine know what you deal in? And why would it give you a high ranking?
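One common compromise, sketched below, is to embed the movie with the object tag and put real, indexable HTML inside it as fallback content (the file name and company name are made up for illustration):

<object type="application/x-shockwave-flash" data="intro.swf" width="550" height="400">
  <h1>Acme Blue Widgets</h1>
  <p>Plain HTML text describing what you sell – visible to search
  engines and to visitors who don't have the plugin.</p>
</object>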

3. Not having a good title

What’s your title, Sir?

A good title is an absolute must for getting a good search engine position and the most vital thing — the click-through. With the title, you are always walking a tightrope. You need a title with your most important keyword near the beginning but it should still appeal to the human reading the results.

Don’t, don’t stuff it with the keywords. How does this look to you —

Search engine position, search engine positioning, search engine ranking

If you saw this in the search engine results, would you click on it, or would you prefer:

Top 10 Search engine positioning mistakes!
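For reference, the title is set with the title tag inside the document’s head; a minimal sketch:

<head>
  <title>Top 10 Search Engine Positioning Mistakes!</title>
</head>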

4. Hosting your site with a FREE host

It takes away all your credibility. You want to do business from your web site. Right? And you can’t even afford a decent web hosting package. How do you expect your prospect to trust you?

Most search engines do not spider web sites hosted on free hosts, and even if they do, they rank them quite low. How many GeoCities web sites have you seen in the top 10?

Also, will you be comfortable buying your merchandise from someone who can’t even afford a small shop? And web site hosting is much cheaper!

Do you want your visitor to look at your message or look at the pop-up that your free web host popped over your site?

Go get a good web hosting package right away.

5. Putting all links in JavaScript

Google and many other search engines don’t read or process JavaScript. So if all your links are in JavaScript, Google is blind to them.

You must have at least one text-based link to every page you want to link to. And the anchor text (the visible text of the link) should contain your important keywords, not “Click here”.
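A minimal sketch of the difference (the URL and function name are made up for illustration):

<!-- invisible to crawlers that skip scripts -->
<a href="javascript:openPage('widgets')">Click here</a>

<!-- crawlable text link with keyword-rich anchor text -->
<a href="http://www.example.com/cheap-blue-widgets.html">cheap blue widgets</a>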

6. Stuffing lots of keywords in the keywords tag

Do you have a keywords tag that lists every word related to your product in one big long series? This is a certain recipe for negative points.

While many search engines have already started to ignore the keywords tag precisely because of this misuse, you should still have one for the search engines that use it. It also serves as a reminder of the keywords you are optimizing for.

However, put only the 2-3 most important keywords in there. Here’s a quick test: don’t put any term in the keywords tag that does not appear at least once in the body copy.
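The tag lives in the head of the page; a minimal sketch using the example phrases from earlier (made up for illustration):

<meta name="keywords" content="cheap blue widgets, blue widgets england" />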

7. Not having any outgoing links

Do you know why the Internet is called the Web? Because web sites link to each other. If you only have incoming links but no outbound links, search engines don’t appreciate it, as it violates the web-like structure of the Web.

Because some people try to conserve PageRank (a proprietary index used by Google to measure link popularity), they avoid having any outbound links. This is one big myth. You can earn very good points by having some outbound links with keyword-rich anchor text, and preferably keyword-rich target URLs as well.

Of course, you should not turn your web page into a link-farm. There should be a few good links amidst some good content.

8. Insisting on session variables and cookies to show information

Session variables are used extensively by ecommerce-enabled sites to trace the path a visitor takes. Shopping carts and various other applications also benefit from session variables. However, it should be possible to visit the various informational and sales pages without needing session variables.

Since you can’t set cookies on search engine spiders, they can’t index your pages properly if the navigation requires cookies and session variables.

9. Regularly submitting your site to the search engines

“We will submit your site to the top 250,000 search engines every month for only $29.95.” Who has not seen these ads or received Spam with similar messages?

And which are those 250,000 search engines? There are only about 8-10 top search engines worth bothering about. And a handful of directories.

With most of the search engines, you only need to submit once to get spidered and then they will keep your listing fresh by crawling your site at regular intervals. All you need to do is to keep adding fresh content to your site and the search engines will absolutely love you. In fact, Google prefers to locate you through a link and not through the URL submission page.

For some sites, like DMOZ, if you resubmit while you are waiting to be indexed, your entry is pushed to the end of the queue. So you could resubmit regularly and never get indexed 🙁

10. Optimizing for more than 2 or 3 search terms

It is virtually impossible to optimize a page for more than 2-3 keywords without diluting everything. Don’t try to work on more than 3 phrases on one page. Split.

Group similar phrases together and work on those on one page. Take 2 or 3 of the other phrases and develop a new page with entirely new copy. Remember, you cannot just copy the same page and squeeze the new phrases in; it will look very odd to the visitor.

Domain Name Scams: Have You Fallen for One?

Have you received information regarding your domain name through postal mail? Did you receive numerous invoices regarding your domain name? Ever been stuck in a contract for well over a year? If any of the above apply to you, you may have fallen for a domain name scam.

Domain name scams occur when a domain name registrar sends you an “invoice” to renew your domain registration, when it is really an agreement to switch to their services.

Scott Karlo, founder of Internet Know How (IKH), LLC, offers these helpful tips on how to prevent your company from falling for domain name scams.

Prevent Domain Name Scams by:

• Setting up automated renewals with timely advance warnings before the billing period. This lets you know when a renewal is scheduled, so you can ignore any other “invoices” you receive from other companies, since you know exactly how much your next payment is and when it is due.

• Using the registrar’s domain-locking feature to protect your domain from being transferred without authorization. This prevents other registrars from initiating a transfer request to pull your domain into their system. By locking your domain you save yourself the time, hassle and cost of switching back.

• Carefully reviewing invoices from registrar companies before mailing the requested payment. Many companies assume the invoice is real: it passes through accounts payable, who recognize the web address and pay it. Fearful of losing the domain, they don’t tend to question the invoice’s legitimacy. Investigate who the invoice is from before paying it!

• Finding an internet consulting firm that will monitor your domain names on a regular basis and keep track of renewal dates. Hiring an internet consulting firm relieves the headache of domain name scams; your company can then ignore all mailings regarding your website domain registration.