Tag Archives: CSS

Firefox to implement calc() in CSS3

I read last week that Firefox will implement calc() in an upcoming version. It isn’t available in any publicly released build yet, but it is coming, as they discussed on their developers’ blog.

Why should you care?

Ever had an element that needed a percentage width with padding? Up until now, because of the way the box model works in modern browsers, you had to wrap the content in an inner container and apply the padding to that container instead. That’s because when you set padding on an element, the value is added to its width, which is how the box model is supposed to work.
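
For example, the old workaround might look something like this (the selector names are just for illustration): the percentage width goes on the outer element, and the padding goes on an inner wrapper.

#box {
width: 60%;
}
#box .inner {
padding: 20px;
}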

The thing is, percentage widths don’t mix well with fixed padding values, and that’s where calc() comes into play.

Let’s say you have a box with the following CSS rules:

#box {
padding: 20px;
width: 60%;
}

If that box sits in a container 1000 pixels wide, the box ends up 640 pixels wide in total: 600 pixels of width plus 20 pixels of padding on each side. That’s not what you want; that’s not 60% wide! You could use calc() to achieve the desired result:

#box {
width: -moz-calc(60% - 40px);
}

You can even be wild and do basic calculations in there:

#box {
width: -moz-calc(60% - 2 * 20px);
}
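
Put together with the padding rule from earlier, a complete sketch (still using the prefixed -moz- syntax) would be:

#box {
padding: 20px;
width: -moz-calc(60% - 2 * 20px); /* subtract both paddings so the total stays at 60% */
}

In the 1000-pixel container from the example, that works out to 560 pixels of content plus 40 pixels of padding: exactly 600 pixels, or 60%.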

Taking it further

What would be even cooler is if you could get the actual padding values as a variable in calc(). Something like:

#box {
width: -moz-calc(60% - (padding-left + padding-right));
}

This way you wouldn’t even have to update your width value when you play with your padding.

Oh well, one day maybe. For now, let’s see whether other browser vendors implement something similar. Let’s not forget this is not part of the W3C CSS3 specs.

Is it a feature you were waiting for?

XHTML – Kicking And Screaming Into The Future

XHTML, the standard, was first released back in 2000. Roughly five years later we began to see major websites revised to use it. Even the favorite whipping boy of standards-compliance punditry, Microsoft, presents its primary homepages, msn.com and microsoft.com, in XHTML. Standards-compliant XHTML sites are still the minority, and the reason is simple: when the W3C released the new standard, the rest of the web running on HTML did not cease to function. Nor will the rest of the web, written in various flavors of HTML, cease to function any time soon. Without any pressing need to conform to the new standard, designers continue to use old, familiar methods. These methods work in any modern browser, so why bother switching?

These sentiments are similar to ones I experienced. A kind of “if it’s not broke, don’t fix it” mentality sets in. Whether HTML was “broken” or not is a different argument. The casual Internet user’s standards are fairly simple: if a site displays without noticeable error and functions to their satisfaction, those standards are met. Whatever additional steps the browser took to make that display possible are irrelevant to most users. This kind of mentality is difficult to overcome in designers accustomed to their old methods.

Technical obstacles to adopting XHTML may be quite steep as well, especially for large, existing websites with complex scripting. Yet the time may eventually come when yesterday’s “tried and true” HTML is little more than an ancient language, unable to be interpreted by modern electronic devices. Whether one agrees with the direction the W3C takes in the development of HTML is irrelevant; you are just along for the ride. With some perseverance, getting the hang of XHTML is possible. In form, it is not as different from HTML as Japanese is from English. Knowing HTML grants a basic knowledge of the language; it simply becomes a matter of learning a particular dialect. Even an original nay-sayer such as myself managed to do it.

Benefits of XHTML
There are two primary benefits to using XHTML. First is the strict nature of valid XHTML documents. “Valid” documents contain no errors, and documents with no errors can be parsed more easily by a browser. Though the time saved is, admittedly, negligible from the human user’s point of view, there is a greater efficiency to the browser’s performance. Most modern browsers function well in what’s usually referred to as “quirks” mode, where, in the absence of any on-page information about the kind of HTML they are reading, they present a “best guess” rendering of a page. Quirks mode will also forgive many errors in the HTML. Modern browsers installed on your home computer have the luxury of size and power to deal with these errors. When browser technology makes the leap to other appliances, it may not have the size and power to be so forgiving. This is where the strict, valid documents demanded by the XHTML standard become important.

The second benefit is in the code itself, which is cleaner and more compact than common, “table” based layout in HTML. Though XHTML retains table functionality, the standard makes clear that tables are not to be used for page layout or anything other than displaying data in a tabular format. This is generally the primary obstacle most designers have with moving to XHTML: the methods many designers have come to rely on to lay out and organize their pages are now taboo. Simple visual inspection of XHTML code reveals how light and efficient it is in comparison to a table-based HTML layout. XHTML makes use of Cascading Style Sheets (CSS), which, when called externally, remove virtually all styling information from the XHTML document itself. This creates a document focused solely on content.

XHTML makes use of “div” tags to define content areas. How these “divisions” are displayed is controlled by CSS. This is known as CSS-P, or CSS Positioning. Trading in “table” tags for “divs” can be tough. Learning a new way of accomplishing an already familiar task is generally difficult. Like learning to use a different design program or image editor, frustration can be constant. Looking at “divs” as a kind of table cell might be helpful, though they are not entirely equivalent. As required by the XHTML standard, always make sure there is a DOCTYPE definition at the top of the document. This is not only required by the standard, it will also force Internet Explorer 6, currently the most common browser, to enter its “standards compliance” mode. IE6 and Firefox, both operating in standards compliance mode, will display XHTML in much the same way. Not identical, but far better than IE6 operating in quirks mode. Learning how to iron out the final differences between displays is the final obstacle and can require a bit of tweaking in the CSS.
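
For reference, a minimal XHTML 1.0 Strict skeleton with a div-based content area might look something like the sketch below (the stylesheet name is just an example):

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Example page</title>
<link rel="stylesheet" type="text/css" href="style.css" />
</head>
<body>
<div id="content">
<p>Page content goes here.</p>
</div>
</body>
</html>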

Clean code has multiple benefits. It creates a smaller page size which, over time, can save costs associated with transfer usage. Though the size difference may appear small, for someone running a highly trafficked site, even saving a few kilobytes can make a big difference. Further, some believe search engines may look more kindly on standards-compliant pages, though this is only a theory. In a general sense, any page modification that makes the content easier to reach and higher in the code is considered wise. Search engines, so it is believed, prefer to reach content quickly, and give greater weight to the first content they encounter. Using XHTML and “div” layout allows designers to accomplish this more easily.

Conclusions
XHTML is the current standard set by the W3C. The W3C continues development of XHTML, and XHTML 2.0 will replace the current standard in the future. Learning and using XHTML today will help designers prepare for tomorrow. Valid XHTML produces no errors that might slow down a browser, and the code produced is clean and efficient. This saves in file size and helps designers better accomplish their search engine optimization goals. Learning XHTML is primarily about learning a new way to lay out pages. Though frustrating at first, the long-term benefits far outweigh any initial inconvenience.

Streamline Your Website Pages

Squeezing the most efficient performance from your web pages is important. The benefits are universal, whether the site is personal or large and professional. Reducing page weight can speed up the browsing experience, especially if your visitors are using dial-up internet access. Though broadband access is the future, the present still contains a great many dial-up users. Many sites, ecommerce sites especially, cannot afford to ignore this large section of the market. Sites with a large amount of unique traffic may also save on their total monthly transfer by slimming down their web pages. This article will cover the basics of on-page optimization in both text/code and graphics.

Graphics

Graphics are the usual suspect on heavy pages. Whether as a result of a highly graphical design or a few poorly optimized images, graphics can significantly extend the load time of a web page. The first step in graphics optimization is very basic: decide if the graphics are absolutely necessary and simply eliminate or move the ones that aren’t. Moving large graphics from the homepage to a separate gallery will likely increase the number of visitors who “hang around” to let the homepage load. Separating larger photos or art into a gallery also provides the opportunity to warn users clicking through that it may take longer to load. In the case of graphical buttons, consider the use of text-based, CSS-styled buttons instead. Sites that use a highly graphical design, a common theme in website “templates”, need to optimize their graphics as well as possible.

Graphics optimization first involves selecting the appropriate file type for your image. Though this topic alone is fodder for far more in-depth analysis, I will touch on it briefly. Images come in two basic varieties: those that are photographic in nature, and those that are graphic in nature. Photographs have a large array of colors all jumbled together in what’s referred to as continuous tone. Graphics, such as business logos, are generally smooth, crisp and have large areas of the same color. Photographs are best compressed as “JPEGs”. The “Joint Photographic Experts Group” format can successfully compress large photos down to very manageable sizes. It is usually applied on a sliding “quality” scale from 1 to 100, 1 being the most compressed and lowest quality, 100 the least compressed and highest quality. JPEG is a “lossy” compression algorithm, meaning it “destroys” image information when applied, so always keep a copy of the original file. Graphics and logos generally work best in the “GIF” or, more recently, the “PNG” format. These formats are more efficient than JPEG at reducing the size of images with large areas of similar color, such as logos or graphical text.

A few general notes on other media are appropriate. Other types of media, such as Flash or sound files, also slow down a page. The first rule is always the same: consider whether they are absolutely necessary. If you choose to build the site entirely in Flash, then make sure the individual sections and elements are as well compressed as possible. In the case of music, I will admit to personal bias here and paraphrase a brilliant old saying: “Websites should be seen and not heard.” Simply put, music playing in the background will not “enhance” any browsing experience.

Text and Code

The most weight to be trimmed on a page will come from graphical and media elements, but it is possible to shed a few extra bytes by looking at the text and code of a web page. In terms of actual text content, there may not be much to do here. A page’s content is key not only to the user’s understanding but also to search engine ranking. Removing or reorganizing content is only necessary in extreme situations, where more than page weight is an issue. An example might be a long, text-heavy web page requiring lengthy vertical scrolling to finish. Such a page is common on “infomercial” sites, and violates basic design tenets beyond those related to page weight.

Code is a different story. A website’s code can be made more efficient in a variety of ways. First, through the use of CSS, all style information for a web page can now be called from an external file. This same file can be called on all of a site’s pages, providing a uniform look and feel. Not only is this more efficient; it is also the official recommendation from the W3C. The same may be said of XHTML and the abandonment of “table” based layout. Tables, though effective for layout, produce more code than equivalent XHTML layouts using “div” tags. Where a minimum of three tags is required to create a “box” with content in a table, only one is needed using divisions. Using XHTML and CSS in combination can significantly reduce the amount of on-page code required by a web page. A final, relatively insignificant trick is the removal of all white space from your code. Browsers don’t require it; it exists primarily so authors can readily read and interpret the code. The savings are minimal at best, but for sites that receive an extreme amount of traffic, even a few saved bytes will add up over time.
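
As a small, hypothetical illustration (the class and file names are invented for the example), here is the same “box” built first with a table and then with a single div styled from an external stylesheet:

<!-- table-based: three tags just to make one box -->
<table><tr><td class="box">Content</td></tr></table>

<!-- div-based: one tag, with the styling kept in an external file -->
<div class="box">Content</div>

/* in the external stylesheet, e.g. style.css */
.box {
width: 300px;
padding: 10px;
border: 1px solid #ccc;
}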

Conclusions

Target images and media files first when seeking to reduce the weight of a page. They are the largest components of overall page weight and simply removing them can significantly reduce total weight. The images that remain should be optimally compressed into a format appropriate for their type, photos or graphics. Avoid huge blocks of text that cause unnecessary vertical scrolling. Organize the site more efficiently to spread the information across multiple pages. Adopt XHTML and CSS to reduce the size of the on-page code, and call the CSS externally. These tips should help reduce the size of your pages and speed their delivery to your viewers.

Search Engine Optimization

The Good and the Bad of SEO: From Google’s Mouth!

I recently had the opportunity to ask questions of some Google staffers. There were some questions I felt I needed to get verification on, so when I had the opportunity via a conference call, I took it.

In this article I highlight some of the points made during the call so you know what Google thinks.

You know it’s bad when you take time from your holidays to come into work to attend a conference call. But that’s what I did a few weeks ago. You see, I had to, because I was going to have the opportunity to ask some Google employees specific questions about things I’d been pretty sure of, but wanted to hear right from the horse’s mouth.

The call lasted less than an hour, but in that time I found that many of the things I had figured were indeed true. So let’s start with the most obvious:

Is PageRank still important?

The short answer is yes: PageRank has always been important to Google. Naturally they couldn’t go into details, but it is as I suspected. Google still uses the algorithm to help determine rankings. Where it falls in the algo mix, though, is up for speculation. My feeling, however, is that they’ve simply moved where the PageRank value is applied in the grand scheme of things. If you want to know what I think, be sure to read this article.

Are dynamic URLs bad?

Google says that a dynamic URL with 2 parameters should get indexed. When we pressed a bit on the issue we also found that URLs themselves don’t contribute too much to the overall ranking algorithms. In other words, a page named Page1.asp will likely perform as well as Keyword.asp.

The whole variable thing shouldn’t come as a surprise. It is true that Google will indeed index dynamic URLs, and I’ve seen sites with as many as 4 variables get indexed. The difference, however, is that in almost all cases I’ve seen, the static URLs outrank the dynamic URLs, especially in highly or even moderately competitive keyword spaces.

Is URL rewriting OK in Google’s eyes?

Again, the answer is yes, provided the URLs aren’t too long. While URL length isn’t necessarily an issue on its own, extremely long URLs can cause problems.

In my experience, long rewritten URLs perform just fine. The important thing is the content on the page.

That was a common theme throughout the call: content is king. Sure, optimized meta tags, effective interlinking and externalized JavaScript all help, but in the end, if the content isn’t there, the site won’t do well.

Do you need to use the Google Sitemap tool?

If your site is already getting crawled effectively by Google you do not need to use the Google sitemap submission tool.

The sitemap submission tool was created by Google to provide a way for sites that normally do not get crawled effectively to become indexed by Google.

My feeling here is that if you MUST use the Google sitemap to get your site indexed then you have some serious architectural issues to solve.

In other words, just because your pages get indexed via the sitemap doesn’t mean they will rank. In fact, I’d bet that they won’t rank, precisely because of those architectural issues I mentioned above.

Here I’d recommend getting a free tool like Xenu and spidering your site yourself. If Xenu has problems, then you can almost be assured that Googlebot will have crawling problems too. The nice thing about Xenu is that it can help you find those problems, such as broken links, so that you can fix them.

Once your site becomes fully crawlable by Xenu I can almost guarantee you that it will be crawlable and indexable by the major search engine spiders. Download it from http://home.snafu.de/tilman/xenulink.html

Does clean code make that much of a difference?

Again, the answer is yes. By externalizing whatever code you can and cleaning up things like tables, you can greatly improve your site.

First, externalizing JavaScript and CSS helps reduce code bloat, which makes the visible text more important. Your keyword density goes up, which makes the page appear more authoritative.
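
In practice that usually just means replacing inline script and style blocks with external references along these lines (the file names are hypothetical):

<script type="text/javascript" src="site.js"></script>
<link rel="stylesheet" type="text/css" href="site.css" />

The browser still loads the same code, but the HTML that the crawler parses is mostly content.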

Similarly, minimizing the use of tables also helps reduce the HTML to text ratio, making the text that much more important.

Also, as a tip, your visible text should appear as close to the top of your HTML code as possible. Sometimes this is difficult, however, as elements like top and left navigation appear first in the HTML. If this is the case, consider using CSS to reposition the text and those elements appropriately.
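
One hedged sketch of that idea (the ids here are invented): put the content div first in the source and pull the navigation back into position with CSS.

<div id="content">Main text comes first in the source</div>
<div id="nav">Navigation links</div>

/* reserve space on the left, then place the nav into it */
#content {
margin-left: 180px;
}
#nav {
position: absolute;
top: 0;
left: 0;
width: 160px;
}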

Do keywords in the domain name harm or help you?

The short answer is neither. However, too many keywords in a domain can set off flags for review. In other words, blue-widgets.com won’t hurt you, but discount-and-cheap-blue-and-red-widgets.com will likely raise flags and trigger a review.

Page naming follows similar rules: while you can use keywords as page names, it doesn’t necessarily help (as I mentioned above). Further, long names can trigger reviews, which will delay indexing.

How many links should you have on your sitemap?

Google recommends no more than 100 links per page.

While I’ve seen pages with more links get indexed, it appears to take much longer. In other words, the first 100 links will get indexed right away; however, it can take a few more months for Google to identify and follow any links beyond the first 100.

If your site is larger than 100 pages (as many are today), consider splitting your sitemap into multiple pages that interlink with each other, or creating a directory structure within your sitemap. This way you can have multiple sitemaps that are logically organized and will allow for complete indexing of your site.
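
A hypothetical layout (the file names are invented) might be a main sitemap page linking to section sitemaps, each kept under 100 links and linking back to the main page:

<!-- sitemap.html -->
<a href="sitemap-articles.html">Article sitemap</a>
<a href="sitemap-products.html">Product sitemap</a>

<!-- sitemap-articles.html -->
<a href="sitemap.html">Main sitemap</a>
<!-- followed by up to 100 links to individual article pages -->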

Can Googlebot follow links in Flash or JavaScript?

While Googlebot can identify links in JavaScript, it cannot follow those links. Nor can it follow links in Flash.

Therefore I recommend having your links elsewhere on the page. It is OK to have links in Flash or JavaScript, but you need to account for the crawlers not finding them. The use of a sitemap can help get those links found and crawled.

As an alternative, there are menus that use CSS (and sometimes a little JavaScript) to produce navigation that looks very much like a common JavaScript-driven menu yet uses static hyperlinks that crawlers can follow. Do a little research and you should be able to find a spiderable alternative to whatever type of navigation your site currently has.
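
A minimal sketch of such a menu (the class name is invented): a plain list of static links styled with CSS, which crawlers can follow without running any script.

<ul class="nav">
<li><a href="products.html">Products</a></li>
<li><a href="articles.html">Articles</a></li>
<li><a href="contact.html">Contact</a></li>
</ul>

/* turn the list into a simple horizontal menu */
.nav {
list-style: none;
margin: 0;
padding: 0;
}
.nav li {
display: inline;
margin-right: 1em;
}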

Overall, while I didn’t learn anything earth-shattering, it was good to get validation “from the horse’s mouth”, so to speak.

I guess it just goes to show that there is plenty of information out there on the forums and blogs. The question becomes determining which of that information is valid and which isn’t. But that, I’m afraid, usually comes with time and experience.