
April 23 2011

Search Notes: Search and privacy and writing robots

This week, we continue looking at search privacy issues and at the ongoing battle between Google, Bing, and Yahoo. Oh, and writing robots — we'll look at those, too.

Privacy and tracking issues

Searchers don't often think about privacy, but governments certainly do, and over time, search engines have had to balance gathering as much data as possible to improve search results against concerns about privacy. In 2008, Yahoo was very vocal about their policy of retaining data for only 90 days. Now, they've changed that policy. They'll keep raw search log data for 18 months and "have gone back to the drawing board" regarding other log file data.

Microsoft and Google keep search logs for 18 months, and Yahoo may have found that keeping this data for a shorter period of time put them at a competitive disadvantage. In the new book "In the Plex," Steven Levy talks about how important Google found search data to be early on.

The search behavior of users, captured and encapsulated in the logs that could be analyzed and mined, would make Google the ultimate learning machine ... Over the years, Google would make the data in its logs the key to evolving its search engine. It would also use those data on virtually every other product the company would develop.

Perhaps that's why Google hasn't added the new "do not track" header to Chrome. The data is too valuable for Google to encourage users to opt out.

Firefox tracking
Firefox 4 includes a do-not-track option. Whether sites choose to honor it is another matter.

As security researcher Christopher Soghoian said to Wired:

"The opt-out cookies and their plug-in are not aimed at consumers. They are aimed at policy makers. Their purpose is to give them something to talk about when they get called in front of Congress. No one is using this plug-in and they don't expect anyone to use it."

And as the Wired article notes, the header doesn't mean much at the moment as companies aren't using it and legislation doesn't require them to.
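For context on what the header actually is: a browser that has opted out simply sends `DNT: 1` with each request, and nothing currently forces a site to act on it. A minimal sketch of how a cooperating server might check it (the function name and surrounding handler logic are hypothetical):

```python
def should_track(headers):
    """Return False when the client sent the do-not-track header (DNT: 1).

    `headers` is any mapping of HTTP header names to values; the lookup is
    case-insensitive, since header names are.
    """
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("dnt") != "1"

# A site that honors the header would skip its tracking-cookie logic:
print(should_track({"DNT": "1"}))        # opted out -> False
print(should_track({"User-Agent": "x"})) # header absent -> True (track by default)
```

The point of the sketch is how small the mechanism is: honoring the header is a one-line check, which is why the debate is entirely about policy, not engineering.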

Bing continues to gain search share

Last week, I noted that Bing was slowly gaining search share in the United States. This week, the Bing UK blog said that they are gaining share in the UK as well. Of course, the gain between February and March of 2011 was only 0.28% and Google is still at 90% share, but hey, Bing will take what they can get.

Yahoo reports revenue declines

On Search Engine Land, Danny Sullivan has a great article digging into the details of Yahoo's second quarter earnings. Yahoo is blaming the revenue decline on the new partnership with Microsoft, but the article points out that the explanation isn't as easy as that, and in fact, revenue began declining long before the switch was made.

Can robots write better content than humans?

In recent weeks, Google has been in the news for tweaking its algorithms to better rank sites with unique, high-quality content rather than pages from "content farms." But in some cases, can machines write higher-quality stories than people? A recent NPR story recounts a journalism face-off between a robot journalist and a human journalist ... and the robot won. Certainly, algorithms are great at data extraction and, in some cases, at presenting that data. But we probably don't want machines to take over the analysis, do we?





March 30 2011

Search Notes: The future of advertising could get really personal

This week, we imagine the future of advertising as we think about how much can really be tracked about us, including what we watch, our chats with our friends, and whether we buy a lot of bacon.

Google expands its predictions

Search engines such as Google have an amazing amount of data, both in general (they do store the entire web, after all) and about what we search for (in aggregate, regionally, and categorized in all kinds of segments). In 2009, Google published a fascinating paper about predictions based on search data. The company has made use of this data in all kinds of ways, such as forecasting flu trends and predicting the impact of the Gulf oil spill on Florida tourism.

You can see the forecasted interest for all kinds of things using Google Insights for Search. Own a gardening web site? You might want to know that people are going to be looking for information on planting bulbs in April and October.

Web Search Interest: planting bulbs

Those predictions are all based on search data, but search engines can do similar things with data from websites. Google is now predicting answers to searches using its Google Squared technology. Want to know the release date of a movie or video game? Just ask Google. A Google spokesperson said this feature is for any type of query as long as they have "enough high quality sites corroborating the answer."
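Google hasn't published how Google Squared decides an answer is sufficiently corroborated, but the basic idea can be sketched as a vote across sources: extract a candidate answer from each page and only surface one when enough pages agree. Everything below (names, threshold, data) is invented for illustration:

```python
from collections import Counter

def corroborated_answer(candidates, min_sources=3):
    """Pick an answer only when enough independent sources agree.

    `candidates` maps a source URL to the answer string extracted from that
    page. Returns the majority answer, or None if no answer clears the
    threshold.
    """
    counts = Counter(candidates.values())
    if not counts:
        return None
    answer, votes = counts.most_common(1)[0]
    return answer if votes >= min_sources else None

# Hypothetical release-date sightings scraped from four pages:
sightings = {
    "siteA.example": "May 20, 2011",
    "siteB.example": "May 20, 2011",
    "siteC.example": "May 20, 2011",
    "siteD.example": "June 3, 2011",
}
print(corroborated_answer(sightings))  # "May 20, 2011" (3 sources agree)
```

The threshold is the interesting design choice: set it too low and one wrong page can put a wrong date above the search results; set it too high and the feature rarely fires.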

Movie guess

Yahoo and Bing evolve the search experience

We hear a lot about Google's experiments with changes in the user experience of search, but the other major search engines are changing as well.

When Yahoo replaced their search engine with Bing's, they said they would continue to innovate the search experience. The most recent change they've made is with Search Direct, which is similar to Google's instant search but includes rich media and advertising directly in a dropdown box.

Bing also continues to revise their user interface, the latest being tweets shown on the Bing news search results page (in a box called "public updates"). This is in addition to their "most recent" box.

Bing results

Search engines and social networks continue to change the face of advertising

Most of us don't spend much time thinking about the ads that appear next to Google search results, but search-based ads were an amazing transformation in advertising. For the first time, advertisers could target consumers who were looking for exactly what those advertisers had to offer. At scale. Want to target an audience looking to buy black waterproof boots? A snowboard roof rack for a 2007 Mini Cooper? A sparkly pink mini skirt? No problem!

Several years ago, Google introduced ads in Gmail that were intended to be contextually relevant to the email you were reading. This attempt was a bit more hit or miss. Contextual advertising is always going to be a bit less relevant than search advertising. If I'm searching for "best hiking gear," I'm likely looking to buy some. If I'm reading an article in the New York Times about hiking trails in Vermont, I might just be filling time while I wait in line to renew my driver's license. And matching advertising to email is even harder. I might open an email about hiking and wonder how I got on an outdoor mailing list.

For Gmail ads, Google is now looking to use additional signals about how you interact with your mail beyond just the content of the message. They noted that when working on the Priority Inbox feature, they found that signals that determined what mail was important could also potentially be used to figure out what types of ads you might be most interested in.

For example, if you've recently received a lot of messages about photography or cameras, a deal from a local camera store might be interesting. On the other hand if you've reported these messages as spam, you probably don't want to see that deal.
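Google hasn't disclosed the actual signals or weights, but the camera-store example can be sketched as a toy interest score: messages about a topic raise it, and spam reports lower it sharply. The message fields and weights below are assumptions for illustration only:

```python
def ad_interest_score(messages, topic):
    """Toy interest score for an ad topic.

    `messages` is a list of dicts with 'subject' and 'reported_spam' keys.
    Each on-topic message adds to the score; one the user flagged as spam
    subtracts more than it would have added.
    """
    score = 0.0
    for msg in messages:
        if topic in msg["subject"].lower():
            score += -3.0 if msg["reported_spam"] else 1.0
    return score

inbox = [
    {"subject": "New camera lenses on sale", "reported_spam": False},
    {"subject": "Camera club meetup", "reported_spam": False},
    {"subject": "CHEAP CAMERA DEALS!!!", "reported_spam": True},
]
print(ad_interest_score(inbox, "camera"))  # 1.0 + 1.0 - 3.0 = -1.0
```

Note how the single spam report outweighs two genuine messages; a negative signal about what you *don't* want is often more decisive than positive ones.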

Facebook is also looking to show us ads based on conversations we're having online. This type of advertising has been available in a more general way on Facebook for some time, but this newest test shows ads based on posts in real time. AdAge's description of it sounds like it hits upon the core reason search ads are so effective:

The moment between a potential customer expressing a desire and deciding on how to fulfill that desire is an advertiser sweet spot, and the real-time ad model puts advertisers in front of a user at that very delicate, decisive moment.

Simply showing better ads in email and next to conversations in social networks is one thing, but the more interesting question is how this idea can be used more broadly. Advertising has always provided the profit for most media (television, newspapers, websites), and innovation, as we saw with the original search ads, is critical in thinking through the future of journalism.

A breakthrough that makes advertising in online versions of videos more successful than commercials on television could be key in the transition of television to online viewing. Americans engaged in 5 billion online video viewing sessions in February 2011. We watched 3.8 billion ads, but if you are like me and watch a lot of Hulu (and many of you are, as Hulu served more video ads than anyone else), you might wonder if all of those ad views were of the same PSA.

Part of why mainstream advertisers haven't taken the leap from traditional television commercials to video ads is that TV commercials are tried and true. Why transition away from that? A good motivator would be an entirely new ad platform that takes real advantage of the online medium. (In the future, perhaps a webcam will track our facial expressions and use that data to stop showing us that annoying commercial!)

Ad platforms have been evolving their use of behavioral targeting for a while, but it's still early days. As for the changes in Gmail ads, it will be interesting to see whether the types of email we get will one day be part of the personalization algorithm for our search (and search ad) results, and whether the email lists we subscribe to and the things we search for will impact the video ads we see on YouTube.

Add to that the predictive elements of search, and the fact that organizations such as Rapleaf can tie our email addresses to what we buy at the grocery store (Googlers drink a lot of Mountain Dew and snack on Doritos ... and bacon), and it's pretty clear that radical shifts in personalized advertising are likely not too far away.

Google still the top place to work

One in four job applicants wants to work at Google. That's nearly twice the number who want to work at Apple. The top write-in company (a list of 150 was offered in the study) was Facebook, followed by the Department of Homeland Security. No, I don't know why either.

Google was also named the top brand of 2011. So, despite their legal woes, consumers and potential employees are still fans.





March 24 2011

Search Notes: Google and government scrutiny

This week's column explores the latest in how we access information online and how the courts and governments are weighing in.

Google continues to be one of the primary ways we navigate the web

A recent Citi report using comScore data is the latest to illustrate how much we use Google to find information online.

The report found that Google is the top source of traffic for 74% of the 35 properties analyzed and that Google traffic has remained steady or increased for 69% of them.

However, it was a slightly different picture for media sites, as many saw less traffic from Google and more traffic from Facebook.

Also, a recent Pew study found that for the 24% of Americans who get most of their political news from the internet, Google comes in third at 13% (after CNN and Yahoo).

More generally, 67% of Americans get most political news from TV and 27% rely on newspapers (the latter is down from 33% in 2002). This trend is what's being seen generally for media, as noted in a recent comprehensive study by Pew Research Center's Internet & American Life Project and Project for Excellence in Journalism, in partnership with the John S. and James L. Knight Foundation.

Google and governments, courts, and other legal entanglements

Google's mission is to "organize the world's information and make it universally accessible and useful." Notice the use of the word "world" rather than "Internet." They're organizing our email, our voice mail, and the earth.

While having everything at our fingertips at a moment's notice is awesome, it also can make governments and courts nervous.

Case in point, the U.S. Senate is planning to hold an anti-trust investigation into Google's "dominance over Internet search" and their increasing competition with ecommerce sites.

Senator Herb Kohl noted that the "Internet continues to grow in importance to the national economy." He wants to look into allegations by websites that they "are being treated unfairly in search ranking, and in their ability to purchase search advertising."

Texas also recently filed an anti-trust lawsuit against Google, looking for access to information about how both organic and paid results are ranked.

Of course, if Google reveals too much, then their systems can be gamed. Searchers won't get the best results, and site owners will lose out too, as the most relevant and useful results won't appear at the top.

Why should we trust Google to rank results fairly? Ultimately, if they build a searcher experience that doesn't benefit the searcher, they could lose users and market share, so it's in their best interest to continue on their stated path.



"Right to be forgotten"


Another fairly recent case involves the Spanish courts. Google search simply indexes and ranks content that exists on the web. When something negative appears about a person or company, they will sometimes ask Google to remove it, but Google's stance is typically that the person or company has to work with the content owner to remove the content — Google just indexes what is public. (Exceptions to this exist.)

In Spain (and other parts of Europe), someone has "the right to be forgotten," but this doesn't apply to newspapers, as they are protected by freedom of expression rules. Does it apply to Google's index of that newspaper content? Apparently, it's been ruled both that freedom of expression rules don't apply to search engines and that Google is a publisher, so laws that apply to newspapers apply equally to Google.

A Spanish plastic surgeon wants Google to remove a negative newspaper article from 1991 from their search results (although he can't legally ask the newspaper itself to remove the article). The Wall Street Journal sums up the case this way:

The Spanish regulator says that in situations where having material included in search results leads to a massive disclosure of personal data, the individual concerned has the right to ask the search engine to remove it on privacy grounds. Google calls that censorship.

Google does remove content based on government requests when legally obligated to do so and it makes a summary of those requests available.

Sidenote to anyone upset about a negative newspaper article appearing in search results: It's probably a bad idea to try to bribe the journalist into taking the content down.



Google can't become the "Alexandria of out-of-print books" quite yet


Search isn't the only area being scrutinized. Google has also been scanning the world's books and making them universally accessible. The courts just rejected a settlement between Google and the Authors Guild that created an opt-out model for authors. Neither Google nor the Authors Guild is happy. Authors Guild president Scott Turow said, "this Alexandria of out-of-print books appears lost at the moment."



Block any site from your Google search results


Since we all use Google to navigate the web, it makes sense that we want to have our own personal Google and block the sites we don't like. Last month in this column, we talked about Google's Chrome extension that enabled searchers to create a personal blocklist. Now this ability is open to everyone. Once you click on a listing and then return to the search results, the listing you clicked includes a "block all results" link. Click that and you'll never see results from that site again. You can manage this blocklist in your Google account.
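The feature runs on Google's side and its implementation isn't public, but the filtering step amounts to dropping results whose host falls under a blocked domain. A sketch, with hypothetical function names, that also hides subdomains the way you'd expect:

```python
from urllib.parse import urlparse

def apply_blocklist(results, blocked_domains):
    """Drop any search-result URL whose host falls under a blocked domain.

    Subdomains are blocked too, so blocking 'example.com' also hides
    'www.example.com'.
    """
    def is_blocked(url):
        host = urlparse(url).hostname or ""
        return any(host == d or host.endswith("." + d) for d in blocked_domains)
    return [r for r in results if not is_blocked(r)]

results = [
    "http://www.spamfarm.example/page1",
    "http://goodsite.example/article",
]
print(apply_blocklist(results, {"spamfarm.example"}))
# ['http://goodsite.example/article']
```

The subdomain rule is the one subtle choice: a naive substring match would also block `notspamfarm.example`, which is why the sketch compares whole host labels.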



Bye, AllTheWeb!


Google may seem unstoppable, but only a few years before Google launched, another search engine was dominant on the web. Alta Vista launched in late 1995 with innovative crawling technology that helped it gain vast popularity. Alta Vista later lost out to Google and was acquired by Yahoo. In late 2010, Yahoo announced they were closing down several properties, including Alta Vista.

That hasn't happened yet, but AllTheWeb, another of Yahoo's search properties, is closing April 4, at which point you'll be redirected to Yahoo. Alta Vista can't be far behind.






March 23 2011

In the future we'll be talking, not typing

Search algorithms thus far have relied on links to serve up relevant results, but as research in artificial intelligence (AI), natural language processing (NLP), and input methods continue to progress, search algorithms and techniques likely will adapt.

In the following interview, Stephan Spencer (@sspencer), co-author of "The Art of SEO" and a speaker at the upcoming Web 2.0 Expo, discusses how next-generation advancements will influence search and computing (hint: your keyboard may soon be obsolete).

What role will artificial intelligence play in the future of search?

Stephan Spencer: I think more and more, it'll be an autonomous intelligence — and I say "autonomous intelligence" instead of "artificial intelligence" because it will no longer be artificial. Eventually, it will be just another life form. You won't be able to tell the difference between AI and a human being.

So, artificial intelligence will become autonomous intelligence, and it will transform the way that the search algorithms determine what is considered relevant and important. A human being can eyeball a web page and say: "This doesn't really look like a quality piece of content. There are things about it that just don't feel right." An AI would be able to make those kinds of determinations with much greater sophistication than a human being. When that happens, I think it will be transformative.

What does the future of search look like to you?

Stephan Spencer: I think we'll be talking to our computers more than typing on them. If you can ask questions and have a conversation with your computer — with the Internet, with Google — that's a much more efficient way of extracting information and learning.

The advent of the Linguistic User Interface (LUI) will be as transformative for humanity as the Graphical User Interface (GUI) was. Remember the days of typing in MS-DOS commands? That was horrible. We're going to think the same about typing on our computers in — who knows — five years' time?


Web 2.0 Expo San Francisco 2011, being held March 28-31, will examine key pieces of the digital economy and the ways you can use important ideas for your own success.

Save 20% on registration with the code WEBSF11RAD

In a "future of search" blog post you mentioned "Utility Fog." What is that?

Stephan Spencer: Utility Fog is theoretical at this point. It's a nanotechnology that will be feasible once we reach the phase of molecular nanotechnology — where nano machines can self-replicate. That changes the game completely.

Nano machines could swarm like bees and take any shape, color, or luminosity. They could, in effect, create a three-dimensional representation of an object, of a person, of an animal — you name it. That shape would be able to respond and react.

Specific to how this would affect search engines and use of the internet, I see it as the next stage: You would have a visual three-dimensional representation of the computer that you can interact with.






March 10 2011

Search Notes: The future is mobile. And self-driving cars

In the search world, the last week has been all about mobile.

Foursquare 3.0

At SMX West on Tuesday, Foursquare's Tristan Walker gave a keynote where he talked about expanding Foursquare as a customer loyalty and acquisition platform for businesses. To that end, they've launched new social and engagement features (just in time for SXSW!).

How is this related to search? Here's the key sentence from Foursquare's 3.0 announcement:

For years we've wanted to build a recommendation engine for the real world by turning all the check-ins and tips we've seen from you, your friends, and the larger foursquare community into personalized recommendations.

Foursquare's new "explore" tab lets you search for anything you want (from "coffee" to "80s music") and provides results based on all the information Foursquare has at its disposal, including places your friends have visited and the time of day.
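Foursquare hasn't published its ranking formula, but a toy version of "explore"-style scoring that mixes friends' check-ins with time of day might look like this. The weights, fields, and venues are all invented for illustration:

```python
def explore_score(venue, friend_checkins, hour):
    """Toy ranking score for a venue in an 'explore'-style search.

    `friend_checkins` maps venue name -> number of check-ins by friends.
    `hour` is the current hour (0-23); each venue carries the hours it's
    usually busy. The weights are made up for illustration.
    """
    friends = friend_checkins.get(venue["name"], 0)
    time_fit = 1.0 if hour in venue["popular_hours"] else 0.0
    return 2.0 * friends + 5.0 * time_fit

venues = [
    {"name": "Blue Bottle", "popular_hours": range(7, 11)},
    {"name": "Night Owl Bar", "popular_hours": range(20, 24)},
]
checkins = {"Blue Bottle": 4, "Night Owl Bar": 1}
ranked = sorted(venues, key=lambda v: explore_score(v, checkins, hour=9),
                reverse=True)
print([v["name"] for v in ranked])  # the coffee shop wins at 9am
```

Run the same query at 10pm and the bar should win; that time-of-day term is exactly the kind of real-world signal a pure web search engine doesn't have.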

Google is trying to get into this space with Latitude and Hotpot. After all, how can Google possibly hope to offer the same quality of search results for "wifi coffee" without data about what kinds of coffee houses you and your friends frequent most often? This is personalization based on overall behavior, not just online behavior, and it's both fascinating and creepy to think about the logical next steps.

Unfortunately for Google, they missed a huge opportunity to get in on this space early when they acquired Dodgeball and effectively killed it, causing the founders to leave Google and start Foursquare.

Bing is also investing in mobile/local search, the latest being "local deals" on iPhone and Android (although not yet on Windows mobile).

Where 2.0: 2011, being held April 19-21 in Santa Clara, Calif., will explore the intersection of location technologies and trends in software development, business strategies, and marketing.

Save 25% on registration with the code WHR11RAD

Continued growth in mobile

According to discussion at a recent local online advertising conference, mobile advertising could become the dominant form of online advertising by 2015. About 5% of paid search is currently mobile, and that number could double by year's end. Google has about 98% mobile search share in the United States and 97% of mobile search spend.

Google says mobile search accounts for 15% of their total searches, distributed as follows:

  • 30% - restaurants
  • 17% - autos
  • 16% - consumer electronics
  • 15% - finance and insurance
  • 15% - beauty and personal

Continued discussion of Google's "content farm" update

As discussed last week, Google's algorithm change impacted 12% of queries and the talk about it has not died down. I wrote a diagnostic guide about analyzing data and creating an action plan and Google opened a thread in their discussion forum to get feedback from site owners.

Self-driving cars!

OK, maybe this isn't really search, except that it's coming from Google, but it's self-driving cars! We live in the future!

Search Engine Land's Danny Sullivan took some video at TED of the cars in action, including some footage inside an actual self-driving car.

Surely flying cars are next.

Got news?

News tips are always welcome, so please send them along.







March 04 2011

Computers are looking back at us

As researchers work to increase human-computer interactivity, the lines between the real and digital worlds are blurring. Augmented reality (AR), still in its infancy, may be set to explode. As the founders of Bubbli, a startup developing an AR iPhone app, said in a recent Silicon Valley Blog post by Barry Bazzell: "Once we understand reality through a camera lens ... the virtual and real become indistinguishable."

Eyetracking

Kevin Kelly, co-founder and senior maverick at Wired magazine, recently pointed out in a keynote speech at TOC 2011 that soon the computers we're looking at would be looking back at us.

"Soon" turns out to be now: Developers at Swedish company Tobii Technology have created 20 computers that are controlled by eye movement. Tom Simonite described the technology in a recent post for MIT's Technology Review:

The two cameras below the laptop's screen use infrared light to track a user's pupils. An infrared light source located next to the cameras lights up the user's face and creates a "glint" in the eyes that can be accurately tracked. The position of those points is used to create a 3-D model of the eyes that is used to calculate what part of the screen the user is looking at; the information is updated 40 times per second.
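Tobii's 3-D eye model is far more sophisticated than anything shown here, but the last step it describes, mapping a tracked eye feature to a point on screen, can be illustrated with the simplest possible per-axis linear calibration. Everything below is a toy stand-in for the real geometry:

```python
def fit_linear_map(samples):
    """Fit screen_x = a*ex + b and screen_y = c*ey + d by least squares from
    calibration samples of ((eye_x, eye_y), (screen_x, screen_y)) pairs.

    Real trackers build a 3-D model of the eyes; this per-axis linear fit is
    only for illustrating the calibration idea.
    """
    def fit_axis(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
                sum((x - mx) ** 2 for x in xs)
        return slope, my - slope * mx
    eyes, screens = zip(*samples)
    ax, bx = fit_axis([e[0] for e in eyes], [s[0] for s in screens])
    ay, by = fit_axis([e[1] for e in eyes], [s[1] for s in screens])
    return lambda ex, ey: (ax * ex + bx, ay * ey + by)

# Calibrate on two known gaze targets, then estimate a new gaze point.
gaze = fit_linear_map([((0.0, 0.0), (0.0, 0.0)),
                       ((1.0, 1.0), (1920.0, 1080.0))])
print(gaze(0.5, 0.5))  # the screen centre: (960.0, 540.0)
```

This also hints at why the 40-updates-per-second figure matters: the per-frame math is cheap, so the hard part is the camera-side feature tracking, not the mapping.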

The exciting part here is a comment in the article by Barbara Barclay, general manager of Tobii North America:

We built this conceptual prototype to see how close we are to being ready to use eye tracking for the mass market ... We think it may be ready.


This type of increased interaction has potential across industries — intelligent search, AR, personalization, recommendations, to name just a few channels. Search models built on interaction data gathered directly from the user could also augment the social aggregation that search engine companies are currently focused on. Engines could incorporate what you like with what you see and do.





March 03 2011

Startups get social with browser extensions

As Google and Bing try to increase reach in a battle over who displays what social elements in search results, search engine add-ons are emerging to bring all social network results together.

Startup HeyStaks Technologies launched its social search Firefox extension March 2 at the DEMO Spring 2011 conference. The extension works in tandem with established engines, displaying social results in a section above the regular search results. The product is built around "staks," which are described in a company press release:

Users can create "search staks," collections of the best Web pages from a group of users on a particular topic; these "staks" can be made public and easily shared with colleagues and friends via email, Twitter, etc., or kept private or shared on an invite-only basis.

The product provides an effective solution for users who share a common goal or shared interest, allowing them to search the web in a collaborative fashion using mainstream search engines, to make their searches much more effective by keeping the content relevance of results high.

An example of a HeyStak "search stak"

Another startup, Wajam, also recently launched a search extension. Wajam scours networks like Facebook, Twitter and Delicious and results are displayed horizontally across the top of search results.

A Wajam search result

The Wajam site says they're currently "oversubscribed," but several reviewers have mentioned that if you send them a tweet (@wajam) requesting a subscription, or like them on Facebook, the wait isn't too long.

HeyStaks is available as a Firefox add-on and an iPhone app. The Wajam extension is available for Firefox, Chrome, Safari and Internet Explorer. Apps for the iPhone and Android are pending.








February 25 2011

Smaller search engines tap social platforms

As the major search engines work to integrate social components into their search algorithms, it's interesting to also see niche engines tapping those same social networks for targeted results.

BuzzFeed, for example, recently launched its Pop Culture Search Engine to search pop-culture memes. It searches in a viral sort of way — the more "buzz" a story gets on social media platforms, the more likely it is to appear in the results. They also use this traffic indicator as a basis for isolating quality content.
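BuzzFeed hasn't published its scoring, but a buzz-weighted ranking of this flavor can be sketched in a few lines. The weights and the log damping below are invented for illustration:

```python
import math

def buzz_score(story):
    """Toy 'viral' ranking signal.

    Weight tweets, likes, and shares, and damp the raw counts with a log so
    that one enormous network can't dominate the score entirely.
    """
    weights = {"tweets": 1.0, "likes": 1.5, "shares": 2.0}
    return sum(w * math.log1p(story.get(k, 0)) for k, w in weights.items())

stories = [
    {"title": "Dramatic Chipmunk", "tweets": 5000, "likes": 12000, "shares": 3000},
    {"title": "Local News Item", "tweets": 40, "likes": 10, "shares": 2},
]
ranked = sorted(stories, key=buzz_score, reverse=True)
print(ranked[0]["title"])  # the buzzier story ranks first
```

The log damping also doubles as a crude quality filter of the kind the post describes: a story needs buzz across several channels, not just one spike, to score well.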

Foodily is another targeted engine. It aggregates recipes from around the web and integrates the information with your friends' comments, recommendations, tips and recipes from Facebook. This approach creates more of a community environment for foodies, setting it apart from straight-up recipe search engines such as those on Cooking.com, Epicurious, or Food & Wine. Foodily can also search for recipes that don't contain certain ingredients. If you're allergic to garlic or out of milk, this feature might come in handy. (Note: Google's new Recipe View also allows you to select ingredients.)

February 24 2011

Buy where you shop gets a little easier

Consulting crowdsourced product reviews has become a common step in many consumer shopping processes. The practice, though good for the consumer, might not be so great for the brick-and-mortar retailer. If a customer comes into a store to touch and inspect products, then returns home to compare reviews before making a final selection, an online purchase becomes far more likely than the customer returning to the store.

A new app by Search Reviews may alter these shopping habits. Consumers can install the free app on a smartphone and use it to scan in bar codes from any of more than four million products. The app then shows a list of aggregated consumer reviews.

As Alexia Tsotsis from TechCrunch points out, the app has a ways to go in terms of its ability to sort by date or store, and it doesn't yet incorporate geolocation. But having aggregated product reviews in hand could encourage more of those buy-where-you-shop consumer experiences.

February 23 2011

Social search gets closer to home

As search engines like Google and Bing scramble to best one another in developing algorithms to allow social media results to appear in internet searches, another company has taken the social search concept a little more personally.


Greplin has just come out of beta testing — and landed $4 million in additional venture capital from Sequoia Capital. The service searches social media closer to home: your Gmail, Facebook, LinkedIn, Google Docs and Calendar, Twitter and Dropbox accounts.

As we connect with more and more people on more and more platforms, this sort of search can come in handy. If one were to, say, misplace a link from one's editor (I will neither confirm nor deny) and that link could have been sent over any one of the above communication portals, it could take quite a long time to manually hunt it down. Having a search tool to locate it in seconds (it didn't take more than two seconds, actually) can bring a whole new level of efficiency to the work day.

Greplin's basic service is free, but there are fee-based upgrades that offer more indexing space and access to additional search parameters, such as Google Apps, Yammer and Evernote.

5 assumptions about social search

As any comp sci major knows, data is nothing more than a pile of facts. It takes meaning to turn data into useful, actionable information.

And applying meaning to a pile of data is exactly what's behind the recent partnership between Facebook and Microsoft's Bing. The idea is to use social information to select the most relevant search results from the staggering data pile that is the Internet.

Instead of assigning relevance to a given web page the old-fashioned way — because of, say, how many hits it received, or how many times your search phrase occurs in its tags, or how much the page's owner paid to have it appear at the top of the search result stack — the Facebook-Bing partnership aims to use a new relevance factor: What your friends like.

The reasoning goes like this: Most of us turn to our friends for recommendations when we want to hire a plumber or buy tickets to a play. By tapping into the websites our Facebook friends have surfed to and "Liked," Facebook and Microsoft hope to be able to serve up search results that are more meaningful. Rating content based on what other people think isn't new, of course. Amazon has been doing it for years. The thing that makes social search different isn't just that it attempts to rate content — it's that it attempts to rate content based on the opinions of people you know and trust.
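Neither company has disclosed how Likes are actually combined with relevance, but the basic re-ranking idea might be sketched as a weighted blend of the original score and friend Likes. The weight, fields, and data are all assumptions:

```python
def rerank_with_likes(results, friend_likes, like_weight=0.3):
    """Blend a result's original relevance score with how many of the
    searcher's friends Liked it.

    The weight is arbitrary; how Bing actually combines these signals has
    not been published.
    """
    def blended(r):
        likes = friend_likes.get(r["url"], 0)
        return (1 - like_weight) * r["score"] + like_weight * likes
    return sorted(results, key=blended, reverse=True)

results = [
    {"url": "plumber-a.example", "score": 0.9},
    {"url": "plumber-b.example", "score": 0.8},
]
likes = {"plumber-b.example": 3}
print([r["url"] for r in rerank_with_likes(results, likes)])
# plumber-b jumps ahead: 0.56 + 0.9 = 1.46 vs 0.63
```

Even in this toy form you can see the tension the assumptions below explore: three clicks from friends were enough to override the engine's own relevance judgment.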

And therein lies the rub.

Social search makes five assumptions that may or may not turn out to be accurate:

1. That "Liking" something is relevant to all (or at least most) searches

If you're researching a mobile phone purchase, you might care what your pals think. If you're looking for the most reliable, comprehensive site for prescription drug interactions, "relevance" probably means something other than popularity.

2. That a single click can convey what someone actually meant when they "Liked" a site

The popularity of the "Like" button (currently on an estimated 2 million web pages) is due in large part to its ease of use. Just a quick click gives people the emotional satisfaction of getting to weigh in on something, of getting to make their voice heard. Trouble is, clicking doesn't really tell you a whole lot. Did your pal like the site design? The product being hawked on the site? The company? The picture of the celebrity endorser next to the "Like" button? Or did his cat jump on his desk and step on the mouse by accident? There's no way to know.

3. That you trust the people who clicked "Like" (or even know them)

It's possible to create multiple Facebook accounts even though they violate the terms of service, and some users are quite indiscriminate about accepting friend requests. As such, there’s a chance that some of your Facebook friends are duplicates — and that others are folks you’ve never met. Both of these factors can diminish the relevance of recommendations.

4. That the people companies say "Liked" something actually did click a "Like" button

If we're extending the real-world scenario ("I care what my ex-workout-buddy thinks about that pair of shoes I want to buy") we could do worse than consider the real-world case of the aluminum-siding salesman who insists that all of your friends and neighbors are already buying ("Come on, you're the last holdout on your block!"). In other words, it's not inconceivable that companies who pay a little extra may turn out to be "Liked" just a little more often by your Facebook pals than sites who don't pony up. After all, how many of us are going to contact all of our friends to confirm?

5. That people won't get weirded out by having their Facebook and web-surfing worlds collide in such a visible way

It's been a long time since industry pundits (and Facebook members) were up in arms over Connect, a Facebook technology that shared member information with third-party websites on an opt-out basis. The fact is that Facebook's partnership with Bing (and other websites) does the same potentially scary thing: It melds what you and your friends do on Facebook with everything else you and your friends do on the web. The jury's still out on whether the average Facebook member realizes that what he "Likes" today could show up in his boss' search results tomorrow — and if he'll care.


Whether or not social search turns out to be useful for the average web user, there's no question that it will be successful for Facebook — which is perhaps more to the point. Facebook's deal with Bing virtually guarantees mass adoption of the "Like" button. If getting a good search ranking requires webmasters to get their visitors to click their "Like" button, that's exactly what they'll do. And that's great news for Facebook, whether or not Joe Searcher ends up better off.




February 22 2011

Search Notes: Paid links don't pay off

Americans conducted 18.2 billion searches in December 2010. We use search to research products, get advice about our health, find local businesses, and track down government resources. Search is a ubiquitous part of our lives, our technology, and our business strategies.

With that in mind, I'll be offering up a weekly roundup of what's happening in the world of search. I'll primarily focus on unpaid (organic) search, not advertising — both what's happening with the search engines and with searcher behavior.

That said, here's what recently caught my eye in the search space.

J.C. Penney and Forbes run afoul of Google's guidelines

The core focus of Google's search team is to provide the most relevant and useful results possible to searchers. This effort is complex and includes all kinds of things, such as working to store a comprehensive index of the web, understanding searcher intent from just a few words in a query, and using a variety of signals to figure out what pages web users like the most.

One such value signal has historically been the linking structure of the web. A page that a lot of sites link to might be a pretty useful page. Understandably, links in ads are excluded from this value signal.
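This link signal is the intuition behind PageRank. A minimal power-iteration sketch shows the idea; the tiny graph and the iteration count here are illustrative assumptions, and production ranking systems work very differently.

```python
# Minimal PageRank-style power iteration over a tiny link graph.
# damping=0.85 is the value from the original PageRank paper; the
# graph itself is a made-up example.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

graph = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}
ranks = pagerank(graph)
# "c" collects links from both "a" and "b", so it ends up ranked highest
```

The sketch also makes clear why paid links are tempting: every extra inbound link pours a little more rank into its target, whether or not anyone actually found the page useful.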

Google has published guidelines (as has Bing) that basically say that if sites try to manipulate the signals that Google uses in ranking sites, Google might remove those sites from their index or reduce their rankings.

Some of these tactics are obvious. We've all come across web spam — pages with no value at all — and it makes sense that search engines would remove these pages entirely. But then there are sites that are legitimate businesses and do have value. To preserve the integrity of its search results, Google takes action when it finds these sites trying to manipulate the algorithms.

The New York Times published a story last weekend outlining how JCPenney.com seemed to rank number one in Google for a suspiciously broad range of queries. On closer investigation, the site turned out to have a large number of external links that had been brokered through a paid link network. These weren't advertising links. They were links intended to artificially increase PageRank (Google's calculation of value from the web's linking structure).

JC Penney search result
Screenshot of JCPenney's organic search results.

Google took action and JCPenney.com plummeted in the search rankings. J.C. Penney promptly fired their search engine optimization agency. In a piece on Search Engine Land, I detailed what happened, and how to avoid a similar fate.

Just as that was cooling down, Forbes outed itself for being on the other side of a link scheme. Forbes.com had been selling links for PageRank (rather than advertising) at least as far back as 2007. Google took action; Forbes fixed the situation. But then last week, someone from Forbes posted in the Google webmaster discussion forum that they'd received a notification from Google (Google sometimes sends these notifications to verified owners in webmaster tools) about "artificial or unnatural links on your site pointing to other sites that could be intended to manipulate PageRank." He posted because he was stumped as to what those links might be. TechCrunch soon pointed them out, and Matt Cutts, head of webspam at Google, posted in the forum that the links identified in the TechCrunch article did indeed violate Google's guidelines, as they apparently came from a paid link network (different from the one used to buy links to JCPenney.com, but a similar idea). Cutts said:

If I could recommend a single post that discusses our policies against buying/selling links that pass PageRank, I would recommend [this]. That post discusses why we think paid links that pass PageRank are a bad idea and gives a timeline with pointers to posts that we've done in the past about this topic.

Cutts recommended that Forbes remove those paid links and then file a reconsideration request with Google.

Forbes then posted a statement on their own site that "there was a period of time in the past when Forbes did sell links through a partner. This is no longer the case, and we began removing those links late last year." They noted that some links still exist on the site, and that was an oversight.

The lesson? If you operate a business online and rely on search traffic as a primary acquisition channel, make sure you have a good handle on everything that goes into it, including the guidelines published by the major search engines. If you hire an agency to help, make sure it's one that follows those guidelines. And be patient. Success from unpaid search may take longer when you don't use tricks to manipulate the search algorithms. The upside is that the search traffic you acquire won't be at risk of drying up at any moment.

Bing gains slight market share, redesigns its toolbar

According to comScore, Bing went from 12% to 13.1% search market share in January (and Google dropped from 66.6% to 65.5%). All Bing-powered searches (including Yahoo) are at 24.4% share, and Google-powered searches are at 69.4%. This is good news for Microsoft, which has been investing substantial resources in search.
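The month-over-month movement is easy to compute from comScore's figures; a quick sketch (the numbers are the ones quoted above):

```python
# Point changes in comScore US search share, Dec 2010 -> Jan 2011,
# using the figures cited in the text: (December, January) pairs.
shares = {
    "Bing":   (12.0, 13.1),
    "Google": (66.6, 65.5),
}
changes = {engine: round(jan - dec, 1) for engine, (dec, jan) in shares.items()}
# Bing gained 1.1 points while Google lost 1.1 points
```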

Bing toolbar
Screenshot of the Bing toolbar

Bing also launched a new version of its toolbar that features dropdown elements that make it more like a portal than a toolbar. You can keep up with your stocks, get weather information, play games, and see Facebook activity right from the toolbar.

Bing's reasons for investing in such a fancy toolbar are likely two-fold. If you use the toolbar a lot, you're likely to search directly from it, and that means you're not using Google to search. And as became clear if you watched the Colbert Report a couple of weeks ago, search engines use toolbar data to gain insight on how people search. One data point Bing uses is clickstream information, including what searchers click on when using Google. The more clickstream data Bing has, the more informed its search results can be.

Chrome's personal blocklist lets you block sites from Google search results

The community at Hacker News has been asking for a way to block sites from appearing in search results, and Google has delivered. Matt Cutts posted there that the feature was a direct result of a request from Hacker News. Just install the Chrome extension, click "block URL" under any search result, and you'll never see that page again.
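Conceptually, the extension is just a client-side domain filter applied to the result list. Here's a hedged sketch of that idea; the extension's real matching rules aren't documented here, so treat the subdomain handling below as an assumption.

```python
# Client-side sketch of a personal blocklist: drop any result whose
# host is a blocked domain (or a subdomain of one).
from urllib.parse import urlparse

def filter_results(results, blocked_domains):
    """Remove results hosted on, or under, a blocked domain."""
    kept = []
    for url in results:
        host = urlparse(url).hostname or ""
        blocked = any(
            host == d or host.endswith("." + d) for d in blocked_domains
        )
        if not blocked:
            kept.append(url)
    return kept

results = [
    "https://example.com/answer",
    "https://spam.contentfarm.example/page",
]
visible = filter_results(results, blocked_domains={"contentfarm.example"})
```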

Google adds more social signals to search results

If you specify your social networks in your Google profile, your search results will now show when those you're connected to have shared that content. For instance, pages that those you follow on Twitter have shared in tweets may get a rankings boost in your search results. This makes your search results not only more social, but also more personalized. It's yet another way that we all see different results.
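A per-user annotation pass like this can be sketched as follows. The boost factor and data shapes are arbitrary assumptions for illustration, not Google's actual weighting.

```python
# Sketch of a social annotation pass: boost and label results that
# the searcher's connections have shared. The 1.2 boost factor is an
# arbitrary assumption for demonstration purposes.

def annotate(results, shared_by, boost=1.2):
    """results: list of (url, score); shared_by: url -> connections."""
    annotated = []
    for url, score in results:
        sharers = shared_by.get(url, [])
        if sharers:
            score *= boost  # shared content gets a rankings boost
        annotated.append((url, score, sharers))
    return sorted(annotated, key=lambda r: r[1], reverse=True)

results = [("a.example/post", 0.9), ("b.example/post", 0.8)]
shared = {"b.example/post": ["@friend1"]}
ranked = annotate(results, shared)
# The tweeted page jumps ahead and carries its sharer as an annotation
```

Because the `shared` map differs for every searcher, the same query produces differently ordered results for different people, which is why results become both more social and more personalized.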

Here's some more coverage on Google's inclusion of social signals:

Upload your own data to Google Public Data Explorer

Google's Public Data Explorer lets you access, analyze, and visualize large data sets. Now you can upload data sets of your own for sharing and visualization. Google is also looking for partnerships with "official providers" of public data for the directory. Anyone who is working with large datasets should check this out, as Google has made it easy to work with and it provides some great visualizations.

Got news?

News tips are always welcome, so please send them along.


February 18 2011

The future of search: What it is and what we hope to do

Radar and Bing are collaborating on an in-depth look at the future of search: the opportunities, the issues, and the way search will be shaped by social media, realtime updates, and other factors.

Our coverage will wander freely into all the avenues and companies that naturally pop up. It will be a wide-ranging and inclusive exploration with leaders in the space, including people from Google, Baidu, and all other related businesses.

As Sean Suchter notes in this video, the future of search is bigger than one company or one technology — its influence will spread across organizations, devices, and entire industries. That's what we aim to examine in this series.

Editorial

All of the content that appears in this project is overseen and vetted by Radar. We understand that association with a specific company leads to natural questions, but valuable and independent editorial is our priority.

If you've got questions, fire away in the comments or drop me a line.
