How Affiliate Marketing Works

From Wikipedia:

Affiliate marketing is a type of performance-based marketing in which a business rewards one or more affiliates for each visitor or customer brought by the affiliate’s own marketing efforts. The industry has four core players: the merchant (also known as ‘retailer’ or ‘brand’), the network (that contains offers for the affiliate to choose from and also takes care of the payments), the publisher (also known as ‘the affiliate’), and the customer. The market has grown in complexity, resulting in the emergence of a secondary tier of players, including affiliate management agencies, super-affiliates and specialized third party vendors.

Ecosystem:

There are a number of participants in the affiliate marketing space.  To easily understand both where you fit and who you should work with, here is a simple overview of the ecosystem:

  • Merchant: The Merchant/Retailer/Brand is essentially the entity that is looking to acquire customers through affiliate marketing.  It pays for each referral of a potential or actual customer, based on a fixed $ per lead/phone call, a % of sale, a fee per visit, etc.
  • Agency: As in traditional marketing, there are Affiliate Marketing agencies such as Gen 3 Marketing, Affiliate Traction, or OPM Pros.  They are traditional services organizations that perform a wide range of tasks, including: affiliate recruitment & relationship management, concept & design of advertising creatives, or promotion distribution.
  • Vendor:  A catch-all term for 3rd-party products or services for the affiliate industry, such as data feed transformations, analytics, or email marketing.  A few examples include: GoldenCAN, Mercent, Google Analytics, or Omniture.
  • Network: The Middleman.  Affiliate networks provide both the technical infrastructure and the business logistics for affiliate marketing to operate.  Responsibilities include: 1) running and maintaining the tracking software that credits referrals from affiliates for actions, 2) recruitment for both merchants and affiliates, and 3) handling all of the financial details (tax forms & payments for affiliates and merchants).  Some examples include: Commission Junction, LinkShare, PepperJam, Affiliate Window, ShareASale, or ClixGalore.
  • Affiliate: The Publisher or Affiliate is the website that 1) finds customers, 2) qualifies customers, and 3) refers the customer to the Merchant.  Common forms of affiliates include CSE (Comparison Shopping Engines), Blogs or other Content Publishers, Mailing lists, or SEM/Display Advertisers.
  • Customer: This is the person that 1) the Affiliate finds and then refers to the Merchant, 2) buys products/services from the Merchant, and 3) pays for or subsidizes the whole affiliate marketing system.

Great Infographic of the Ecosystem

Economics:

Now that we’ve covered the ecosystem participants, here’s a simple explanation of the economics behind each member.

  • Merchant: The merchant either buys or produces a product or service for $X, and then sells it to the Customer for $Y (whether on an individual sale basis, or on a lifetime expected Gross Receipts/Gross Cost of Goods Sold basis for a customer).  The difference between the revenue ($Y) and the costs ($X) is the Gross Profit.  Subtract the Cost of Customer Acquisition (the associated affiliate commissions), and you’re left with the Net Profit, which is what the Merchant ultimately makes, even if that’s a negative number.
    A quick note about Marketplace vs eCommerce Economics:

    • Marketplaces: Amazon, eBay, Buy.com, and Sears all operate “marketplaces”, where multiple Merchants can upload a list of products available to sell & ship.  At these Marketplaces, the Customer interacts directly with the Marketplace (Amazon/eBay/etc) to find their product, make their purchase, and process payment.  Nearly all customer contact details/payment information/purchase history/etc remain with the Marketplace, and all future communication with Customers is limited to the transaction details.  In this scenario, the Merchant is, for all intents and purposes, more similar to a drop shipper than a store or seller; profitability is simply the revenue received from the marketplace minus the costs of fulfillment.
    • eCommerce Stores: This is your traditional branded eCommerce store & shopping cart software.  Customers navigate directly to the Merchant website, find products, create an account, make a purchase, and the Merchant processes the order.  One distinct difference from the Marketplace model is that the customer BELONGS to the Merchant.  The Merchant is responsible for the entire shopping experience, and usually has the opportunity to market (email, phone, snail-mail, etc) to the Customer in the future.  When acquiring Customers through Affiliate Marketing, the initial commission is referred to as a Cost of Customer Acquisition because, unlike in the Marketplace model, the Merchant can market to the Customer again and generate repeat orders without paying that acquisition cost a second time.
  • Agency: Generally, most traditional Marketing Agencies charge clients a % of marketing spend.  In keeping with tradition, so do most Affiliate Marketing Agencies.  For example, if an affiliate marketing program generates $10k in affiliate commissions, a 5% agency fee to the Merchant is $500.  Agencies also have monthly minimums that may be higher than the service fee %.
  • Vendor:  Most often vendors will charge the Merchant either a flat rate for a service ($5k/month) or a variable rate based on actual usage (e.g. $5 per feed).  They may also have a monthly minimum.
  • Network: Here’s where it gets tricky.
    • Network Access Fee/Setup Costs: These fees range from $0 – $10k as a one-time payment.  One network described their setup fee as follows: “The set up fee buys you into a network that not only offers an unsurpassed level of service and experience, but we are also leading the affiliate market in terms of the technology we offer. Our technology aims to provide a more targeted variety of creative linking methods, the tracking of which encourages publishers to work with us and increases your conversion rate.”
    • Minimum Deposit Amount: Generally, the Merchant’s account with the network acts like a Debit Card/Checking account.  There is usually a minimum deposit such that the Merchant pre-pays commissions to the Network, from which the Network deducts affiliate commissions.  This minimum deposit generally ranges from $0 to $2k.
    • Transaction fees/Override Amount: This is the true revenue stream for most Networks.  Often called a Transaction Fee or Override Amount, these fees are typically a percent of the Affiliate Commission, charged to the Merchant for use of the Network services.  This fee is generally between 20%-30% of the Affiliate Commission.  For example, assume that the Affiliate Commission is 10% and the Override Amount is 20%.  With a Sales Amount of $100, the Commission payout is $10 to the Affiliate, and the network charges 20% of that $10, or $2, for the Override Amount, for a total charge of $12 to the Merchant.
    • License Fee/Monthly Charge: Some networks charge an additional fee for the ongoing tools and customer support burden.  One network described their License Fee as: “This monthly cost enables access to all the contact tools, tracking, account support services, real-time sales reports, invoicing & payment services and technical support.”  Generally this is between $0 and $500/month.
    • Monthly Minimum: There is commonly a monthly minimum for the Transaction fees/Override Amount.  For example, if the Transaction fee is 20% and there was $10k of gross sales generated with a 10% affiliate commission, the amount due to the Affiliate Network would be $200 ($10k gross sales * 10% affiliate commission * 20% network transaction fee = $200).  If the Monthly Minimum is $250, then the greater of the two (the $250 minimum rather than the calculated $200) is charged.  Generally this amount ranges from $0 – $500 depending on the network.  (The sketch after this list works through both the Override and Monthly Minimum arithmetic.)
  • Affiliate: The affiliate or publisher earns the Cost Per Action or Commission as defined by the Merchant’s affiliate agreement.  This is often a percent of the sale, a percent of recurring subscription revenue, or a flat amount per signup, form completion, or order.
  • Customer: Again, the customer is ultimately looking for the Merchant’s product or service, and the Affiliate finds the customer and refers them to the Merchant, where they make a purchase and the Affiliate gets compensated for the referral.
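
To make the Network fee mechanics concrete, here is a minimal Ruby sketch of the Override Amount and Monthly Minimum arithmetic from the bullets above.  The figures are the examples already given; the method names are mine, purely for illustration.

# Override Amount: the Network charges a percent of the Affiliate Commission.
def network_charge(sale_amount, commission_rate, override_rate)
  commission = sale_amount * commission_rate  # paid to the Affiliate
  override   = commission * override_rate     # kept by the Network
  { :commission => commission, :override => override, :merchant_total => commission + override }
end

# $100 sale, 10% commission, 20% override => $10 + $2 = $12 charged to the Merchant
p network_charge(100.00, 0.10, 0.20)

# Monthly Minimum: the Network bills the greater of the calculated
# transaction fees or the monthly minimum.
def monthly_network_fee(gross_sales, commission_rate, override_rate, monthly_minimum)
  calculated = gross_sales * commission_rate * override_rate
  [calculated, monthly_minimum].max
end

# $10k gross sales, 10% commission, 20% override => $200 calculated,
# but with a $250 minimum the Network charges $250.
p monthly_network_fee(10_000, 0.10, 0.20, 250)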

Technology: 

After understanding how all of the business relationships work, the next step is to understand the mechanics of the implementations.  There are two basic implementations: one involves Cookies, and the other involves back-end API development.  For simplicity, I’ll just cover the Cookie-based approach, which is also the most common.

  • The Affiliate Network will generate a unique URL for each Affiliate for each and every Merchant.  In an example Commission Junction link: http://www.tkqlhce.com/click-3576795-10746577 the 3576795 represents the Affiliate, and the 10746577 represents both the Merchant and the specific advertising campaign that is being run.
  • A Customer will click/navigate to that URL and have a Cookie set saying that Affiliate 3576795 referred that customer to the Merchant.  The AdId (10746577) is also associated with a destination page (in this case http://shop.advanceautoparts.com/webapp/wcs/stores/servlet/home) to which the customer is redirected.
  • The customer performs product searches, adds an item to the Shopping Cart, and continues to check out.
  • On the “Thank You” page, the Merchant has a pre-installed 1×1 transparent gif (served by the network) with a URL parameter such as productSales=100.00.  The Affiliate Network then decides: 1) if the purchase was made by a referral from a pre-existing Cookie for Affiliate 3576795, it credits the account for the commission, or 2) if it doesn’t see a cookie, it ignores the purchase.  (A rough sketch of that crediting decision follows this list.)
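
To make that concrete, here is a rough Ruby sketch of the decision the Network makes when the pixel fires.  This is a simplification under my own assumptions (the cookie name, parameter names, and method are hypothetical); real networks layer deduplication, cookie expiration windows, and fraud checks on top.

COMMISSION_RATE = 0.10  # illustrative 10% commission rate

# cookies - cookies sent along with the 1x1 pixel request
# params  - query string parameters on the pixel URL, e.g. productSales=100.00
def credit_conversion(cookies, params)
  affiliate_id = cookies["affiliate_referral"]  # e.g. "3576795", set during the click redirect
  return :ignored if affiliate_id.nil?          # no referral cookie => purchase is ignored

  commission = params["productSales"].to_f * COMMISSION_RATE
  # ...persist the commission against the affiliate's account here...
  { :affiliate => affiliate_id, :commission => commission }
end

p credit_conversion({ "affiliate_referral" => "3576795" }, { "productSales" => "100.00" })
# => {:affiliate=>"3576795", :commission=>10.0}
p credit_conversion({}, { "productSales" => "100.00" })
# => :ignored
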
Affiliate Marketing Workflow


Real Life

When merchants ask me about affiliate networks, they often want to know the fee details.  I can’t know them perfectly, but here are the fees as I understand them:

Network’s Name                            Setup Fee   Deposit                  Transaction Fee        Other
Affiliate Future                          $1,000      $0                       30% of commission      $100/mo access fee
AvantLink                                 $2,000      $500                     3% of sale’s total
Affiliate Window                          $5,000      Unknown                  30% of commission      $500 monthly min
Commission Junction                       $3,000                               30% of commission      $500 monthly min
Google Affiliate Network (closing down)   $1,000      varies ($1,000-$5,000)   25% of commission      $500 monthly min
LinkShare                                 $1,000      $3,000                   2-3% of sale’s total   $500 monthly min
Pepperjam                                 no data
ShareResults                              $500        $500                     25% of commission
ShareASale                                $550        $100                     20% of commission      $25 monthly minimum
ClixGalore                                $0          $75                      25% of commission      $0 monthly minimum

I created an example Google Document with simple calculations of Commission Rate Scenarios + Affiliate Network Rates: https://docs.google.com/spreadsheet/ccc?key=0Ai7IYcrzwmCMdDdIVUNqaEd3azVXOTdrSERFWjZGUHc&usp=sharing
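
If you’d rather script the comparison than use a spreadsheet, here is a small Ruby sketch of the same kind of scenario math, using a few rows from the fee table above (rates as I understand them, with one-time setup fees and deposits deliberately left out):

# Rough monthly cost comparison for a single sales scenario.
# Rates come from the table above; treat them as illustrative, not gospel.
networks = {
  "ShareASale"   => { :override => 0.20, :monthly_min => 25 },
  "ShareResults" => { :override => 0.25, :monthly_min => 0 },
  "ClixGalore"   => { :override => 0.25, :monthly_min => 0 }
}

gross_sales     = 10_000  # monthly affiliate-driven sales
commission_rate = 0.10    # 10% affiliate commission

networks.each do |name, fees|
  commissions = gross_sales * commission_rate
  network_fee = [commissions * fees[:override], fees[:monthly_min]].max
  puts format("%-12s commissions $%.2f, network fee $%.2f, total $%.2f",
              name, commissions, network_fee, commissions + network_fee)
end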

Author Note: I consider this post a draft, and will need to add a lot more for it to be anywhere near comprehensive.  5/6 Edit: Some verbiage updates & inclusion of Marketplace vs eCommerce Store economics.

Scala @ Frugal Mechanic

Last night Tim gave a talk about Scala at the Seattle Tech Startups meeting.

From the Scala website:

Scala is a modern multi-paradigm programming language designed to express common programming patterns in a concise, elegant, and type-safe way.  It smoothly integrates features of object-oriented and functional languages.

At Frugal Mechanic we use Scala for all of our backend processing and for about 50% of our front end (the other 50% is Ruby on Rails).  If you’re interested in learning more about Scala, here are some resources to get you going:

The slides from his talk are available here or from SlideShare:

A Brief Intro to Scala


Building a Scrabble MMO in 48 hours


Last weekend, I was lucky enough to participate in the first Node Knockout. The rules are simple: you have 48 hours to code something amazing using the Node.js platform. Planning ahead is allowed, but it all has to be on paper (no digital assets, designs, etc).

I’m not sure what it is about these weekend coding competitions, but they’ve always fascinated me. I love seeing real-life demonstrations of the fact that technology has progressed to the point where you can go from a harebrained idea to a deployed, functional product in a few intense dev days. Or maybe it’s because I remember my previous life as an ASP.NET developer, where the first week of a project would barely cover the schema design and data access layer. Either way, it’s great for the Node.js community. If you want to convince the world that a language/framework is worth using, the best way is to use it to build amazing stuff, amazingly fast.

Having also competed as a solo team in Rails Rumble the last few rounds with decent results, I had a good idea of what to expect. This time I was able to team up with some really talented local developers I met at MongoSeattle a few months ago: Grant Goodale, Damon Cortesi, and Aviel Ginzburg. I figured with a team of 4, we could really bite off something ambitious and might have a chance at actually completing it.

The Final Product

Scrabbly board

What we were able to build was a Massively Multiplayer Online Scrabble game. Tiles are placed in realtime, and each player competes with everyone else to build the longest chain of words and the highest score. Once you place a tile, it’s on the board forever and other players have to work around it. You can start your game on anyone’s tiles, but after you place your first word you will need to stick to your own tiles until you run out of lives (swapping tiles loses a life).

The Tools


As soon as we came up with the idea, we knew exactly the right tool for this application. MongoDB’s geospatial indexing was a lifesaver here. We are able to store every tile that’s placed on the board by its x,y coordinate and query a bounding box whenever someone changes their view. The resulting queries are ridiculously fast, even with a tile collection approaching 90 thousand documents. Once we get too big for that, scaling is going to be dead simple, given we can just use modular arithmetic to produce a shard key. (A rough sketch of the query pattern follows.)
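
For illustration, here is roughly what that bounding-box pattern looks like. The game itself runs on Node.js; this sketch uses the Ruby mongo driver instead, and the collection and field names are my guesses, not the actual Scrabb.ly schema.

require 'mongo'

# Hypothetical schema: each placed tile is stored by its x,y board coordinate.
client = Mongo::Client.new('mongodb://localhost:27017/scrabble')
tiles  = client[:tiles]

# A 2d index lets MongoDB answer bounding-box queries over the coordinates.
tiles.indexes.create_one(:pos => '2d')
tiles.insert_one(:pos => [12, -3], :letter => 'Q', :player_id => 42)

# When a player scrolls, fetch only the tiles inside the visible viewport.
def tiles_in_view(tiles, x1, y1, x2, y2)
  tiles.find(:pos => { '$geoWithin' => { '$box' => [[x1, y1], [x2, y2]] } }).to_a
end

# Modular arithmetic over the coordinates can later act as a shard key,
# e.g. bucketing the infinite board into fixed-size regions.
def shard_bucket(x, y, buckets = 16)
  ((x / 100) + (y / 100)) % buckets
end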


For the web server, we ended up using ExpressJS since it seemed like the most mature of the NodeJS web frameworks. As a Ruby/Rails developer (no real node/expressjs experience before), I mostly avoided this aspect of the application and just worked with static html files, javascript, and css. From what I did use of ExpressJS, it felt very young. It works well if you do everything exactly right and are willing to do some trial and error, but don’t expect edge cases to work, or any kind of comprehensive documentation. Even the simple acts of setting cookies or performing a redirect took dev effort. Node’s philosophy seems to favor avoiding too much abstraction. This was frustrating to begin with, but I soon realized that I really should know the internals of these mechanisms anyway. Using node will force you to become an expert in all things HTTP, which is not a bad thing at all.


For the UI end of things, I figured I’d just stick to what I know. jQuery and jQuery UI handled the job just fine. I do have some issues with the codebase of jQuery UI, and it’s not very hackable, but it’s widely used and well documented. I felt I had to compromise a lot on usability, however, and would have loved to rewrite the sortable component (for the Tile Rack) myself to make it work exactly as I wanted. But that just wasn’t possible in the timeframe we had.


Hosting-wise, we used Heroku (with pre-alpha node support), and MongoHQ handled the MongoDB hosting for us. It all worked like a charm. We had no issues whatsoever during development, and everything deployed smooth as butter. Since we started getting hammered with traffic, the Heroku node does tend to go down a few times a day, requiring a restart, but I’m sure they’ll get that fixed up once they hit their actual release. In the meantime, after the competition we’ll probably move our servers elsewhere.

The idea meetup (2 weeks before NodeKO)


The first meeting we had was the idea session. I came prepared with my own idea (along with 3 pages of notes, use cases, and monetization strategies) for a variant on real time web chat. The team heard me out, but it promptly got rejected. It was too bland, and they figured everyone else would be doing it. And if we were going to do this for free in our spare time, they wanted to build something fun. I agreed as well, so we started thinking up game ideas.

As you can probably imagine, these were all over the place initially. We started off exploring the idea of building the card game “Set” (a real-time card game). Then we decided we should build something everyone understands, and thought about doing an online Battleship. Then Scrabble. Then we started thinking about ways to combine them: “You sank my Scrabbleship!” After some additional debate, the idea of Scrabble on an infinite board started to seem plausible. An hour and a couple of filled whiteboards later, we had figured out what we wanted to do, along with some initial ideas for game mechanics.

We also came up with the name. We knew it wasn’t going to last, but Scrabb.ly seemed fun to use for the short term. Until the lawyers at Hasbro got wind of it of course.

The feasibility meetup (1 week before NodeKO)

After getting psyched up about the infinite Scrabble idea, we wanted to test out whether the game mechanics would actually work. What better way than to chop up and glue together 5 Scrabble boards? The result is pictured here:

Scrabble Boards

Sitting down and playing with 4 people together on one gigantic Scrabble board allowed us to tweak the rules a bit, and test if it would actually be fun to play in practice. We had a blast, which was proof enough to me that this thing was worth building.

Unfortunately we also found out that one of our team members (Damon) had an important client meeting the next weekend so we’d be down to 3 people on the project. This was a bummer, but we pressed on.

The UI meetup (1 day before NodeKO)

Right before the competition started, we met up one more time to really nail down the final user interface design. This time we brought some graph paper and colored pencils and started drawing.


We were also able to snag meetings with a couple of the Seattle TechStars teams to walk them through the paper prototypes and get feedback on what made sense and what didn’t. This was incredibly useful for finalizing all our intro copy, button labels, and the general user interaction of the site before the competition started. We now had reasonable confidence that most users would understand and be able to use our app once it was built.

Finally, before we left for the night, we came up with a gameplan for each of us: what tasks we needed to do, and in what order.


We decided that if we were going to win this thing, we’d need to have one person (Aviel) fully focused on design for the entire duration of the competition. That turned out to be a great decision as it allowed us to attain a level of design polish and multiple iterations of design ideas that wouldn’t have been possible without a dedicated designer fully obsessing over every pixel.

The 48 hours of coding

Like all the other teams, we started as soon as the competition went underway (Friday at 5pm PST) and worked well into the night. Our team of 3 was able to work together at the Founder Coop offices in Seattle (where TechStars Seattle is located). We took over a conference room and made it our home for the weekend.

I stuck to my gameplan, went line by line, and got each UI feature functional. Grant worked solely on the backend data (mongodb and nodejs), and Aviel was able to focus on designing a gorgeous game board and tiles in Photoshop. Since our interface was just one screen, we were able to spend a lot of time on each little piece of it, and go through a few different design ideas.


After going all of Friday night into Saturday morning, we felt we had made great progress. The infinite board was working, along with dragging tiles back and forth between the rack and the board. And we had some nice initial tile and board design PSDs to integrate. After about 5 hours of sleep on Saturday, we started back up.


Adding Board Tiles and Basic Rack Panel

Adding Rack Tiles and Drag/Drop


The Saturday night – Sunday morning stretch was the toughest of the competition by far. Low sleep and lots of broken functionality definitely took a toll on motivation levels. After working through the night, on Sunday morning I was pretty much resigned to shipping a half-broken piece of crap. The basic infinite board, tile placing, and design looked good, but the game didn’t work at all. We couldn’t validate placing a word onto the board, and we hadn’t even started on the push notifications. I toyed with the idea of cutting my losses and just going to sleep.


Adding Compass and Random Tile Generator


Pressing on, we found small wins here and there, and finally got the word placing to work. Integration with push notifications from the server was handled by a 3rd-party web service called PusherApp, and it took all of about 15 minutes to get up and functional. After those 2 big features were in, I had the motivation to actually finish the thing. On Sunday from noon to 3pm, we basically built the entire game system within Scrabb.ly (end turn, score submission, restart game, etc). It was rapid fire. We went through all the missing features, spent 20 – 30 minutes coding each one, and moved on to the next. The rapidly approaching deadline meant whatever didn’t get done in a short window of development had to be cut.

Adding Play Buttons and Logo Panel


Adding the Score Panel, the leaderboard, and the jGrowl Alerts. Lightened up the Board colors.


At 3pm on Sunday, we were able to focus the final 2 hours purely on bug fixes and polish: adding an intro screen, putting in a leaderboard, and fixing a ton of bugs with resolving word positions. The frantic code, checkin, deploy, and swearing lasted right up until minutes before the deadline. Unfortunately, some bugs forced us to strip out the leaderboard in one of our last checkins. But at that point I couldn’t really get anything working right, as I was running on fumes and could barely think straight.

After the 5pm deadline, we took some shots of whisky, drank a bottle of wine, and called it a night.

To get a feel for how frantic the dev activity was, here’s the full git commit log: http://gist.github.com/561680. Near the end of the competition I wasn’t even coherent enough to spell the swear words correctly in the commit messages.

Final released version!


Traffic Stats

In the 3 days since we launched, we’ve managed to get almost 7 thousand unique visitors. Over 90 thousand individual Scrabble tiles have been placed on the board so far. Feedback has been fairly positive, aside from the fact that the board has gotten way too large and it’s difficult to scroll far enough to find a place to start playing. We plan to iterate on some of these fixes and release a new version on Friday after competition judging ends.

Why we do it

Over 200 teams signed up for Node Knockout, and around 100 produced final judge-able products. The question is, why do we do it?

I think what competitions like this give you is a structured way to “prove your chops” to the world. Having a successful entry, even if it doesn’t win, gets huge exposure from important people in the tech sector. Having a concrete example of your abilities in a public forum can mean the difference between a low hourly rate and a high one, or the difference between a dream job with a high salary and a mediocre one. The tangible benefits of competing successfully can easily equate to tens of thousands of dollars over the course of a year for a worthy competitor.

Another great benefit is the motivation that comes with working against the best and the brightest in the industry. You know right away you can’t slack off and still have a chance. Judging from the results, this seems to help bring out the best in people. The high caliber of the final submitted apps supports that.

A Month Later

Just over a month after the competition, Scrabb.ly has hit 2 million placed tiles and over a hundred thousand unique visitors. Traffic is growing every day through word of mouth, and we’ve had to switch web hosts twice just to handle the load (we’re now on Blue Box Group, for both our web and MongoDB servers).

We never thought our Node Knockout side project would get the response it has so far, but we’ve decided to keep it running and see just how ridiculously huge this Scrabble board can get!

———-

About Author: Jacques Crocker is a Ruby on Rails developer in Seattle.
Follow him on Twitter at @railsjedi

Whitelabel Analytics using Google Analytics

Whitelabel products can be seen anytime you walk into a grocery store.

Whitelabel Product (image courtesy of Wikipedia)

They’re becoming even more popular in online business models:

Vast Whitelabel

Tracking

Most whitelabels operate either by providing a back-end API, or by having a DNS CNAME entry pointing to the host.  In the case where you’re powering a whitelabel through a CNAME entry, the partner usually wants a way to track their stats. An easy way for them to do this is to include their Google Analytics code and then set up a Custom Segment to isolate the traffic to the whitelabel site. That setup is easily accomplished through the following instructions:

Creating Custom Reports

If you are only interested in statistics for a specific Web site, following the steps above to drill down to your site becomes a time-consuming process. You can follow the steps below to create a custom report that is already focused on one or more URLs. In Google Analytics, you can create an “advanced segment” to narrow down a report to basically any criteria.

  1. Log into Google Analytics
  2. Click “Advanced Segments” under “My Customizations” on the left menu.
  3. Click “Create New Custom Segment” on the top right of the screen.
  4. In the “type to filter” search box on the left menu type in “hostname” as one word.
  5. Drag the green hostname box over to the center column and on top of the dotted box labeled “dimension or metric”.
  6. Change the condition to “contains” in the drop down box.
  7. In the value box, type in your full URL of your web site minus any slashes, http://, www or anything else. Example: autoparts.yourdomain.com
  8. Down at the bottom of the form you need to give your new segment a name, for example call it “Entomology site traffic”.
  9. Click the button “create segment” to the right of the name box.

You have now created a new segment, which will be available at any point in the future, but only from your account. This segment can be applied to your reports by following the instructions below.

Viewing Custom Reports (Segments)

  1. Log into Google Analytics
  2. From any report or the Dashboard, click the down arrow in the drop-down window next to “advanced segments” on the top right of the screen.
  3. By default “All Visits” will be checked, which is the default segment that shows all traffic and all sites. Under “Custom Segments” you will see the segments you created.
  4. If you wish to only view your segment, check it in the “Custom Segments” box and un-check “All Visits”. If you leave “All Visits” checked, it will overlay the reports to allow comparisons.
  5. Once you have selected your segment and unchecked “All Visits” you can click “Apply” below the “Custom Segments” menu.

You are now viewing all of the reports narrowed to only this specific segment or Web site. You can verify the segment is selected by looking above the graph on any report; it should show the name of the segment you created.

How to create “standardized” CSV exports from MySQL

When you start querying, analyzing, and exporting data from MySQL (especially anything free-form like SEO/SEM keyword data extracts), you’ll quickly find out you can’t simply concatenate your columns together and join them with a delimiter to be read later.

Sampling some of our keywords illustrates a number of problems:

Quotes: ! 1/4″ inlet muffler

Newlines: xtra vision

headlight h4703xv hours

Control Characters: ^HToyota+starlet+coolant+temp+sensor, 242°/250° duration @ .050″ 601 lift

Commas/Tabs/Semicolons or other typical delimiters: “+bosch, +part no., +67647″, 2500hd

chevy cat back exhaust

Unicode: ايسوزو 20011

ב.מ.וו 545i, 宝马m3冷却系统

The Original MySQL attempt to export this was:

SELECT
	`medium`,
	`keyword`,
	`landingpage`,
	SUM(`entrances`) as entrances
INTO OUTFILE 'keyword_data.csv'
	FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
	LINES TERMINATED BY '\r\n'
FROM
	`daily_medium_keyword_landingpage_visits`
WHERE
	medium = 'organic'
GROUP BY
	`medium`,
	`keyword`,
	`landingpage`;

This produced what looked like a really great file format; any crazy characters were escaped properly and it looked great on paper:

"organic","! 1/4\" inlet muffler","/cache.aspx?q=\"1+4\"+\"inlet+muffler\"&d=76309488474199&mkt=en-US&setlang=en-US&w=6d792683,38b10c05",1
"organic","!((^ toyota avalon mechanical lifter repairs","/auto-parts/toyota/engine-mechanical-valve-lifter",1
"organic","!(gl-508 rm) motorcraft","/auto-part/149373-motorcraft-gl508rm-remanufactured-alternator",1
"organic","!947 dodge power wagon","/auto-part/983439-mallory-ignition-wire-set-m11947",1
"organic","!967 ford mustang parts","/auto-parts/ford-mustang",1
"organic","!971 142 volvo specifications","/auto-parts/volvo-142/engine-electrical-distributor",1
"organic","!972 buick skylark heater core","/cars/1972-buick-skylark",1

I loaded up FasterCSV to start parsing the file and quickly figured out how naive I was about CSV file formats. MySQL’s default use of \ as an escape character isn’t “standard.” It turns out there isn’t an official CSV standard, but I did find something close: http://tools.ietf.org/html/rfc4180#section-2

Hmm, so effectively none of the control characters should be escaped, but fields should be quoted if they contain a delimiter, and a quote inside a field should be replaced with "" to be standard CSV.

The other wrinkle is that the standard calls for \r\n to be the line delimiter, but what happens if there are other \r\n sequences in the columns?

First Step – Remove Escape Characters:

	FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY ''

By adding ESCAPED BY '', none of the characters will be escaped in the output.

Step Two – Replace \r\n with \n:

	REPLACE(`keyword`, "\r\n", "\n")

Step Three – Replace " with ""

	REPLACE(REPLACE(`keyword`, "\r\n", "\n"), '"', '""')

Step Four – Replace NULL-Byte with a blank

	REPLACE(REPLACE(REPLACE(`keyword`, "\r\n", "\n"), '"', '""'), char(0), "")

Step Five – Replace NULL field with a blank

	IFNULL(REPLACE(REPLACE(REPLACE(`keyword`, "\r\n", "\n"), '"', '""'), char(0), ""), "")

Putting it all together:


SELECT
 IFNULL(REPLACE(REPLACE(REPLACE(`medium`, "\r\n", "\n"), '"', '""'), char(0), ""), "") as medium,
 IFNULL(REPLACE(REPLACE(REPLACE(`keyword`, "\r\n", "\n"), '"', '""'), char(0), ""), "") as keyword,
 IFNULL(REPLACE(REPLACE(REPLACE(`landingpage`, "\r\n", "\n"), '"', '""'), char(0), ""), "") as landingpage,
 SUM(IFNULL(`entrances`, 0)) as entrances
INTO OUTFILE 'keyword_data_new.csv'
 FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY ''
 LINES TERMINATED BY '\r\n'
FROM `daily_medium_keyword_landingpage_visits`
WHERE medium = 'organic'
GROUP BY
 IFNULL(REPLACE(REPLACE(REPLACE(`medium`, "\r\n", "\n"), '"', '""'), char(0), ""), ""),
 IFNULL(REPLACE(REPLACE(REPLACE(`keyword`, "\r\n", "\n"), '"', '""'), char(0), ""), ""),
 IFNULL(REPLACE(REPLACE(REPLACE(`landingpage`, "\r\n", "\n"), '"', '""'), char(0), ""), "");
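
To sanity-check the export, I can read it back with FasterCSV (my own verification snippet; it assumes the keyword_data_new.csv produced by the query above is in the current directory):

require 'rubygems'
require 'fastercsv'

# RFC 4180-style quoting means no custom escape handling is needed;
# the export uses \r\n row separators, per the query above.
FasterCSV.foreach('keyword_data_new.csv', :row_sep => "\r\n") do |medium, keyword, landingpage, entrances|
  # Embedded newlines, quotes, and unicode arrive intact in each field.
  puts "#{medium} | #{keyword.inspect} | #{landingpage} | #{entrances}"
end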

Put Google AdSense Into the Corner, Let your page render first using jQuery

First off, if you haven’t installed both Yahoo YSlow! and Google Page Speed then bookmark this page and come back to it later. Chances are you have bigger fish to fry on the rest of your website.

Now if you read my post on AB Testing with Google Website Optimizer, you probably caught on that <script type="text/javascript"> tags block rendering of content on your pages.

Now that you’re using these tools, you’re noticing that some of these AdSense calls take 300ms to render – sometimes multiple SECONDS… and when you combine that with the blocking behavior above, it means your content below the ads isn’t rendering yet…

WTF Google!

The good news is that it’s possible to unblock the rendering of your page’s content. The bad news is that it involves repositioning the ads after the page has loaded (a little bit of a flicker).

Solution:

The mechanics of the solution involve three pieces:

  1. A placeholder <div> where the ad should be displayed
  2. Shoving the actual ad elements as one of the last items in your HTML document (so everything else gets rendered first)
  3. jQuery code to reposition the ad back into its placeholder slot

Inline Content:

The container for your ads should be pretty straightforward: you know what size ad unit you’re including, so block out that section of your page:

   <div>
      <div id="fmad_placeholder_Detail_Page_Bottom" style="height: 90px; line-height: 90px;">&nbsp;</div>
   </div>

Bottom Page Content:

Your moving div that actually calls AdSense/Ad Manager code:

<div id="fmads_Detail_Page_Bottom" style="display: none; text-align:center;">
  <script type="text/javascript">
   //<![CDATA[
     GA_googleFillSlot("Detail_Page_Bottom");
   //]]>
   </script>
</div>

jQuery Magic to reposition your ads back to where they should be:

<script type="text/javascript">
//<![CDATA[
  var fmPositionGoogleAdsInit = false;
  function fmPositionGoogleAd(ad_name) {
    var fun = (function () {
      var placeholder = $('#fmad_placeholder_'+ad_name);
      var ad = $('#fmads_'+ad_name);

      // jQuery selections are always truthy; check .length so we only
      // reposition when both elements actually exist
      if(placeholder.length && ad.length) {
        var pos = placeholder.offset();
        ad.css({
          "display": "block",
          "position": "absolute",
          "left": pos.left+"px",
          "top": pos.top+"px",
          "width": placeholder.width()
        });
      }
    });

    if(!fmPositionGoogleAdsInit) {
      $(window).resize(fun);
      $(document).ready(fun);
    }
    fun();

    fmPositionGoogleAdsInit = true;
  };

  function fmPositionGoogleAds() {
    // List all of your ad slots
    fmPositionGoogleAd('Detail_Page_Bottom');
  };

  fmPositionGoogleAds();
//]]>
</script>

Gotchas:

Now, if you have any other dynamic content on the page, you may have to manually reposition the ads again.  This is done by simply calling:

fmPositionGoogleAds();

Hope that helps!

Ruby Script to Extract Google Analytics Keywords, Landing Page, Medium Entrances to CSV/TSV format

I’ve had versions of this script hacked together before.  In fact, if you search for “Google Analytics Data Extractor” the #1 result is a google groups posting I made in a former life for a C# program I developed: http://groups.google.com/group/analytics-help-misc/browse_thread/thread/d2ad6ddf3d73e511

There are now a couple of Ruby gems out there for handling the Google Analytics API.

I’ve been doing some light contributing to one of them, including the pagination support (automatically grab all results in 10,000-result chunks).

I pushed a couple of changes up today at http://github.com/er1c/gattica that hopefully will get merged into the main trunk soon (http://github.com/activenetwork/gattica).

One of the more useful applications of the library is extracting your Organic Search keywords from Google Analytics. We’re using this at Frugal Mechanic to see how people search for Auto Parts.

require 'rubygems'
require 'gattica'
require 'fastercsv'

ga_profile = "" #Enter your Profile Here
start_date = Date.new(2009, 1, 1)
end_date = Date.new(2009, 12, 31)
file_path = "data" # Directory Needs to exist

puts "Google Username: "
u = gets.chomp
raise "bad username" unless !u.empty?
puts "Google Password: "
system "stty -echo"
pw = gets.chomp
system "stty echo"
raise "bad password" unless !pw.empty?

class ExtractKeywords
  def initialize(email,password)
    @gs = Gattica.new({:email => email, :password => password})
  end

  def get_accounts
    results = []
    @gs.accounts.each{|account|
      profile = {}
      profile[:site_title] = account.title
      profile[:profile_id] = account.profile_id ## this is the id required for requests to the API
      profile[:account_name] = account.account_name
      profile[:account_id] = account.account_id
      profile[:web_property_id] = account.web_property_id
      results << profile
    }
    return results
  end

  def connect_profile(profile)
    @gs.profile_id = profile[:profile_id]
  end

  def connect_profile_id(profile_id)
    @gs.profile_id = profile_id
  end

  def get_keywords(start_date = nil, end_date = Date.today)
    csv_data = @gs.get({
        :start_date => (start_date || (end_date - 365)).to_s,
        :end_date => end_date.to_s,
        :dimensions => ["medium", "keyword", "landingPagePath"],
        :metrics => "entrances",
        :sort => "-entrances",
        :page => true}).to_csv(:long)

    return FasterCSV.parse(csv_data, :headers => true)
  end

end

gs = ExtractKeywords.new(u, pw)
gs.connect_profile_id(ga_profile)

(start_date .. end_date).each { |date|
  file = "#{file_path}/medium_keyword_landingpage_visits_#{date.strftime('%Y-%m-%d')}.csv"
  next if File::exists?( file )

  FasterCSV.open(file, "w") do |csv|
    csv << ["medium", "keyword", "landingPagePath", "entrances"]
    keywords = gs.get_keywords(date, date)
    keywords.each{|row|
      csv << [row["medium"], row["keyword"], row["landingPagePath"], row["entrances"]]
    }

  end
}