Top 10 Technology Trends for 2010: Cloud Computing Is Red-Hot

Gartner has released its forecast of the top 10 strategic technologies for enterprises in 2010. Cloud computing, which has drawn growing enterprise attention in recent years, green IT (IT for Green), social computing exemplified by globally popular services such as Facebook, and the mobile application business model that Apple created with the iPhone all made the list. Gartner says these technologies will have a significant impact within the next three years.

Gartner publishes its technology trend forecast annually; the technologies it selects are widely regarded in the industry as signposts for where IT is heading over the next few years, and serve as one of the reference indicators for technology companies' annual planning.

The top 10 trends Gartner named this year are: cloud computing, advanced analytics, client computing, green IT, reshaping the data center, social computing, security (activity monitoring), flash memory, virtualization for availability, and mobile applications. Gartner believes these ten technologies will have a major impact on enterprises within the next three years.

Gartner analyst David Cearley said that PC-related vendors are all racing to catch the cloud computing wave, a phenomenon worth taking seriously. Gartner expects cloud resources to help cut IT costs, and notes that cloud services ranging from the bare-bones Amazon Web Services to the more sophisticated Google App Engine all fall within the cloud opportunity; enterprises should evaluate which form best suits their needs.

Interestingly, Facebook, which has swept the globe of late, also figures in the top 10. Web applications such as Facebook and Twitter are what Gartner calls social computing, which covers internal enterprise social networks as well as interaction with customers; enterprises need to understand how social networks operate and their potential business applications.

On flash memory, Gartner believes falling prices will drive growth at a compound annual growth rate (CAGR) above 100% over the next few years, with heavy adoption in consumer devices, entertainment equipment, and other embedded IT systems.

In addition, green IT, advanced analytics, and social computing have made the list two years running, while the business model Apple created by pairing mobile phones with software applications is new to this year's list, and its market is expected to expand steadily.

Commercial Times [楊玟欣, Taipei], 10/22/2009

Google Ad Planner

Google Ad Planner is a free media planning tool that Google launched on June 24, 2008. After an advertiser or agency defines the demographics and interests of its target audience, the tool quickly identifies the websites that audience is likely to visit, so they can make better-informed decisions about where to place their ads.

1. The data Google Ad Planner provides

  • Through Google Ad Planner, users can look up a wide range of sensitive data on almost any website, from major sites such as Yahoo, Baidu, Sina, Sohu, QQ, and NetEase down to individual blogs like this one. Google does not, however, expose data for its own properties: queries for Google or YouTube return nothing.
  • The sensitive data available includes site traffic (unique visitors, page views, reach, total visits, average visits per visitor, average time on site), visitor profile (gender, age, education, household income), site category, and ad information (supported ad formats, average daily impressions). The data covers more than 40 countries, is computed over 30-day windows, and is refreshed every 30 days.
  • Google maintains that because these figures are estimated from massive volumes of search and site data, they are accurate enough to be used directly for advertising decisions.

2. Guesses about Google Ad Planner's data sources

Google's official explanation of Google Ad Planner's data sources includes:

  • Aggregated Google search data
  • Anonymized Google Analytics data
  • Other external consumer panel data
  • Data from third-party market research firms
  • Google AdSense data (not mentioned in the official explanation, but referenced in the help documentation)

Beyond the sources above, others that netizens have speculated about include:

  • Google Toolbar usage data (see: TechCrunch, Search Engine Land)
  • Account data from Google's various services (see: 月光部落格)

Danny Sullivan even asked Google directly whether Google Ad Planner uses Toolbar data. The answer was no comment, because it is part of Google's secret sauce, and that non-answer only made Danny Sullivan more convinced that Google Ad Planner does use Toolbar data.

3. Analyzing where Google Ad Planner's specific data comes from

Below, drawing on the information available, I offer a fairly subjective analysis of Google Ad Planner's data sources, to see roughly how each kind of data it provides might be produced.

Site traffic

  • Divide websites into two classes by whether they use Google Analytics:
    • Sites that use Google Analytics (GA sites)
    • Sites such as Baidu and Yahoo that would never use Google Analytics (NGA sites)
  • Toolbar usage data yields a rough preliminary traffic figure (the TB value) for most websites.
  • From the many Google Analytics accounts that have opted in to "data sharing," Google can obtain GA sites' visit statistics (the GA value, i.e. benchmarking data) and compare them against those sites' TB values to derive an approximate ratio of true traffic to preliminary traffic (the GATB ratio).
  • With a site's TB value and the GATB ratio, Google can then estimate the true traffic of an NGA site (a minimal sketch of this extrapolation follows the list).
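
To make the guessed-at pipeline concrete, here is a minimal Python sketch of the extrapolation step. Everything in it, including the numbers, field names, and the naive averaging, is a hypothetical illustration of the guess above, not Google's actual method:

    # Hypothetical sketch of the TB/GA extrapolation guessed at above;
    # none of these numbers or names come from Google.

    def estimate_true_traffic(tb_value, gatb_ratio):
        """Scale a raw Toolbar-panel count (TB value) by the observed
        ratio of GA-reported traffic to Toolbar traffic (GATB ratio)."""
        return tb_value * gatb_ratio

    # Calibration on GA sites, where both numbers are known:
    ga_sites = [
        {"ga_visits": 120_000, "tb_visits": 30_000},
        {"ga_visits": 400_000, "tb_visits": 95_000},
    ]
    gatb_ratio = sum(s["ga_visits"] / s["tb_visits"] for s in ga_sites) / len(ga_sites)

    # For an NGA site only the Toolbar count is available, so extrapolate:
    print(f"{estimate_true_traffic(50_000, gatb_ratio):,.0f} estimated visits")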

If the assumptions above hold, Google Ad Planner's site traffic data comes mainly from:

  • Anonymized Google Analytics data
  • Google Toolbar usage data

Visitor profile

The Google Ad Planner help documentation states that the current visitor demographic data comes from:

  • Data from third-party market research firms
  • Other external consumer panel data

Given that Google Ad Planner can indeed only provide visitor profiles for the United States, I find this explanation credible. 月光部落格 noticed that "among the Chinese-language sites in the statistics, overseas sites dominate, especially Chinese-language sites in North America (many of which cannot be reached from inside China), while sites based in China are scarce." This odd result arises precisely because Google Ad Planner currently includes only U.S. visitor-profile data: filter candidate Chinese-language sites through American visitors' profiles, and what comes out are naturally the sites North American users frequent, which are often exactly the ones already blocked ("harmonized") by the GFW.

Site category

The Google Ad Planner help documentation states that a site's category is mostly assigned automatically by the system, with corrections drawn from:

  • Anonymized Google Analytics data

This explanation is fairly plausible, though it is rather comical that Baidu ends up categorized under "Government & Regulatory Bodies."

Ad information

As for data such as the ad formats a site supports and its average daily impressions, there is little suspense: it almost certainly comes from:

  • Google AdSense data
  • Reference from: http://por.tw/seo/rewrite.php/read-73.html

    Use YouTube to drive awareness and conversions

    If you’d like to expand your advertising to YouTube, consider using YouTube Promoted Videos. You can make a video about your business and then promote that video alongside relevant YouTube search results. Similar to AdWords, YouTube Promoted Videos lets you decide where you’d like your videos to appear on YouTube, place bids in an automated online auction, and set daily spending budgets.

    Last week we launched “Call-to-Action,” a new feature that allows you to add a clickable overlay to your Promoted Videos. This allows you to drive viewers to a website off-YouTube, which can help you drive more conversions and generate engaged, well-targeted traffic for your brand or product.

    You can learn more about this new conversion feature by visiting the recently launched YouTube Biz Blog. To start using Promoted Videos today, visit ads.youtube.com.

    Posted by Amanda Kelly, Inside AdWords crew Monday, July 06, 2009 at 1:12 PM

    Reference from: http://adwords.blogspot.com/2009/07/use-youtube-to-drive-awareness-and.html

    New parameter handling tool helps with duplicate content issues

    Duplicate content has been a hot topic among webmasters and our blog for over three years. One of our first posts on the subject came out in December of ’06, and our most recent post was last week. Over the past three years, we’ve been providing tools and tips to help webmasters control which URLs we crawl and index, including a) use of 301 redirects, b) www vs. non-www preferred domain setting, c) change of address option, and d) rel=”canonical”.
    We’re happy to announce another feature to assist with managing duplicate content: parameter handling. Parameter handling allows you to view which parameters Google believes should be ignored or not ignored at crawl time, and to overwrite our suggestions if necessary.
    Let’s take our old example of a site selling Swedish fish. Imagine that your preferred version of the URL and its content looks like this:
    http://www.example.com/product.php?item=swedish-fish
    However, you may also serve the same content on different URLs depending on how the user navigates around your site, or your content management system may embed parameters such as sessionid:
    http://www.example.com/product.php?item=swedish-fish&category=gummy-candy
    http://www.example.com/product.php?item=swedish-fish&trackingid=1234&sessionid=5678
    With the “Parameter Handling” setting, you can now provide suggestions to our crawler to ignore the parameters category, trackingid, and sessionid. If we take your suggestion into account, the net result will be a more efficient crawl of your site, and fewer duplicate URLs.
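
    To illustrate what "ignoring" a parameter amounts to, here is a small Python sketch (not Googlebot's actual code) in which URLs differing only in the ignorable parameters collapse to a single canonical form; the parameter names follow the Swedish-fish example above:

        # Illustrative only: collapse URLs that differ solely in
        # parameters a crawler has been told to ignore.
        from urllib.parse import urlparse, urlunparse, parse_qsl, urlencode

        IGNORABLE = {"category", "trackingid", "sessionid"}

        def collapse(url):
            parts = urlparse(url)
            kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in IGNORABLE]
            return urlunparse(parts._replace(query=urlencode(kept)))

        urls = [
            "http://www.example.com/product.php?item=swedish-fish",
            "http://www.example.com/product.php?item=swedish-fish&category=gummy-candy",
            "http://www.example.com/product.php?item=swedish-fish&trackingid=1234&sessionid=5678",
        ]
        # All three collapse to the same preferred URL:
        assert len({collapse(u) for u in urls}) == 1
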
    Since we launched the feature, here are some popular questions that have come up:
    Are the suggestions provided a hint or a directive?
    Your suggestions are considered hints. We’ll do our best to take them into account; however, there may be cases when the provided suggestions may do more harm than good for a site.
    When do I use parameter handling vs rel=”canonical”?
    rel=”canonical” is a great tool to manage duplicate content issues, and has had huge adoption. The differences between the two options are:
    • rel=”canonical” has to be put on each page, whereas parameter handling is set at the host level
    • rel=”canonical” is respected by many search engines, whereas parameter handling suggestions are only provided to Google
    Use whichever option works best for you; it’s fine to use both if you want to be very thorough.
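
    To underline the page-level nature of rel=”canonical”: it is just a <link> element placed in the <head> of every duplicate variant, pointing at the preferred URL. A hypothetical Python helper might render it like this:

        # Hypothetical helper: render the rel="canonical" link element
        # that each duplicate page should carry in its <head>.
        def canonical_link(preferred_url: str) -> str:
            return f'<link rel="canonical" href="{preferred_url}"/>'

        print(canonical_link("http://www.example.com/product.php?item=swedish-fish"))
        # <link rel="canonical" href="http://www.example.com/product.php?item=swedish-fish"/>
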
    As always, your feedback on our new feature is appreciated.
     

    Reunifying duplicate content on your website

    Tuesday, October 06, 2009 at 3:14 PM

    Handling duplicate content within your own website can be a big challenge. Websites grow; features get added, changed and removed; content comes—content goes. Over time, many websites collect systematic cruft in the form of multiple URLs that return the same contents. Having duplicate content on your website is generally not problematic, though it can make it harder for search engines to crawl and index the content. Also, PageRank and similar information found via incoming links can get diffused across pages we aren’t currently recognizing as duplicates, potentially making your preferred version of the page rank lower in Google.

    Steps for dealing with duplicate content within your website

    1. Recognize duplicate content on your website.
      The first and most important step is to recognize duplicate content on your website. A simple way to do this is to take a unique text snippet from a page and to search for it, limiting the results to pages from your own website by using a site: query in Google. Multiple results for the same content show duplication you can investigate.
    2. Determine your preferred URLs.
      Before fixing duplicate content issues, you’ll have to determine your preferred URL structure. Which URL would you prefer to use for that piece of content?
    3. Be consistent within your website.
      Once you’ve chosen your preferred URLs, make sure to use them in all possible locations within your website (including in your Sitemap file).
    4. Apply 301 permanent redirects where necessary and possible.
      If you can, redirect duplicate URLs to your preferred URLs using a 301 response code. This helps users and search engines find your preferred URLs should they visit the duplicate URLs. If your site is available on several domain names, pick one and use the 301 redirect appropriately from the others, making sure to forward to the right specific page, not just the root of the domain. If you support both www and non-www host names, pick one, use the preferred domain setting in Webmaster Tools, and redirect appropriately. (A minimal sketch of such a redirect appears after this list.)
    5. Implement the rel=”canonical” link element on your pages where you can.
      Where 301 redirects are not possible, the rel=”canonical” link element can give us a better understanding of your site and of your preferred URLs. The use of this link element is also supported by major search engines such as Ask.com, Bing, and Yahoo!.
    6. Use the URL parameter handling tool in Google Webmaster Tools where possible.
      If some or all of your website’s duplicate content comes from URLs with query parameters, this tool can help you to notify us of important and irrelevant parameters within your URLs. More information about this tool can be found in our announcement blog post.
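
    To make step 4 concrete, here is a minimal sketch of serving 301 redirects with only Python's standard library; a real site would normally configure this in its web server or framework instead, and the duplicate-to-preferred mapping below is hypothetical:

        # Minimal illustrative server: permanently redirect known duplicate
        # URLs to their preferred counterparts with a 301 response.
        from http.server import BaseHTTPRequestHandler, HTTPServer

        PREFERRED = {  # hypothetical mapping of duplicates to preferred URLs
            "/product.php?item=swedish-fish&sessionid=5678":
                "/product.php?item=swedish-fish",
        }

        class RedirectHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                target = PREFERRED.get(self.path)
                if target:
                    self.send_response(301)  # permanent: users and crawlers follow it
                    self.send_header("Location", target)
                    self.end_headers()
                else:
                    self.send_response(200)
                    self.end_headers()
                    self.wfile.write(b"preferred content")

        if __name__ == "__main__":
            HTTPServer(("localhost", 8000), RedirectHandler).serve_forever()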

    What about the robots.txt file?

    One item which is missing from this list is disallowing crawling of duplicate content with your robots.txt file. We now recommend not blocking access to duplicate content on your website, whether with a robots.txt file or other methods. Instead, use the rel=”canonical” link element, the URL parameter handling tool, or 301 redirects. If access to duplicate content is entirely blocked, search engines effectively have to treat those URLs as separate, unique pages since they cannot know that they’re actually just different URLs for the same content. A better solution is to allow them to be crawled, but clearly mark them as duplicate using one of our recommended methods. If you allow us to crawl these URLs, Googlebot will learn rules to identify duplicates just by looking at the URL and should largely avoid unnecessary recrawls in any case. In cases where duplicate content still leads to us crawling too much of your website, you can also adjust the crawl rate setting in Webmaster Tools.
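
    The crawler's-eye view of this point can be sketched in a few lines of Python: once a duplicate URL is disallowed, a polite crawler can never fetch it, so it can never discover that the URL serves the same content as the preferred page. The robots rules and URLs below are hypothetical:

        # Toy illustration: a blocked duplicate cannot be fetched, so it
        # cannot be recognized as a duplicate of the preferred URL.
        from urllib.robotparser import RobotFileParser

        robots = RobotFileParser()
        robots.parse([
            "User-agent: *",
            "Disallow: /product.php?item=swedish-fish&sessionid=",
        ])

        preferred = "http://www.example.com/product.php?item=swedish-fish"
        duplicate = "http://www.example.com/product.php?item=swedish-fish&sessionid=5678"

        for url in (preferred, duplicate):
            if robots.can_fetch("*", url):
                print(f"crawlable: {url} (content can be compared and deduplicated)")
            else:
                print(f"blocked:   {url} (looks like a separate, unique page)")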

    We hope these methods will help you to master the duplicate content on your website! Information about duplicate content in general can also be found in our Help Center. Should you have any questions, feel free to join the discussion in our Webmaster Help Forum.

    Reference from: http://googlewebmastercentral.blogspot.com/2009/10/reunifying-duplicate-content-on-your.html