What I learned this week about: Pardot Responsive Layouts

I built my first responsive email template in 2014, when I was just coming into the MOPs/Salesforce Admin portion of my role and realized that my company’s marketing emails were NOT responsive.

Me being me, I ended up sitting through a free webinar put on by Litmus to gain the basic understanding of how responsive emails worked, and from there I was the go-to on the team for all things HTML and CSS. I fumbled my way through enough to ensure that our emails and custom landing pages would look good on mobile.

Side note: I did all of this because I had reviewed the open rates based on device and found that approximately 30-40% of our emails were being opened on mobile. That’s a pretty sizable chunk of people having to squint at tiny print on a small screen.

I am not an expert on this stuff at all, so I’m not about to sit here and break down how to do this – there are much better resources out there for that. All you really need to understand here is that responsive emails are built on tables, as in:

<table>
  <tr><td></td></tr>
</table>

That much, at least, I understood, having been big into building strangely elaborate personal webpages back in school. I wish I had screenshots of some of the work I did back then – it wasn’t terrible, all things considered.

For responsive emails these are important because you end up with nested tables – tables inside of table cells inside of tables. Tableception, if you will. (Is that joke still a thing? I use it a lot.)

And then on top of that, there are some special little tweaks you can make to the CSS itself to ensure that when the size of the screen shrinks, those tables all shift around into place, so instead of squished, you get stacked.

[Screenshot: the same email on a narrow screen, columns stacked] <– Like that.
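That stacking behavior typically comes from a media query in the template’s CSS. Here’s a minimal sketch – the class name, breakpoint, and widths are illustrative, not Pardot’s actual template code:

```html
<style>
  /* On narrow screens, force each column cell to render as a
     full-width block, so side-by-side cells stack vertically
     instead of being squished. */
  @media only screen and (max-width: 480px) {
    .stack-column {
      display: block !important;
      width: 100% !important;
    }
  }
</style>
<table width="600">
  <tr>
    <td class="stack-column" width="300">Left column</td>
    <td class="stack-column" width="300">Right column</td>
  </tr>
</table>
```

On a desktop client the two cells sit side by side; below 480px the media query kicks in and they stack, one on top of the other.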

So what does this have to do with Pardot??

In a few implementations clients have used one of the prebuilt responsive templates in Pardot and found that instead of stacking, their template just shrank down into a smaller version of the same layout.

For whatever reason this didn’t seem to happen in previews or even with all template layouts, but for this client it did, and I wanted to fix it. It took some digging. And by digging, I mean rewriting the code almost line-by-line to find the issue, but when I did find it, it seemed a little silly.

The key to that fancy table action above working is in the CSS that exists for that email, so before we even start adding our tables and rows and cells and tables inside of cells…we have our CSS classes defined. Think of those classes as references; later in the HTML tables, I can reference my CSS via the class name, and that is used to display the info according to that reference.

But what I found was a table referencing a class that wasn’t there. Simple mistake and simple solution – we just had to drop the appropriate class name (reference) in the CSS, and BOOM! We had a nice, stacked template.
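To make that concrete, here’s a reconstruction of the kind of mismatch involved (not the client’s actual code). The HTML references a class that the CSS never defines, so the stacking rule silently never applies, and the fix is simply adding the missing rule back:

```html
<style>
  @media only screen and (max-width: 480px) {
    /* The fix: this rule was missing entirely, so the cell below
       had nothing telling it to go full-width on small screens,
       and the layout just shrank instead of stacking. */
    .stack-column { display: block !important; width: 100% !important; }
  }
</style>
<table>
  <tr>
    <!-- This cell referenced a class with no CSS behind it -->
    <td class="stack-column">Content</td>
  </tr>
</table>
```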

So what happened??

¯\_(ツ)_/¯

It’s possible that the client made some small change during editing that removed that class. It’s possible that on that particular layout, the class just wasn’t included. I don’t know, but what I took away from that was to just check.

This is true no matter the platform. Any time you are using work or designs created by another source for mass consumption and reuse, just take a minute and review it. Become familiar with it. In a way, the HTML/CSS of your email templates is like a manual for a new gadget you’re putting together. It’s tedious to go through it, and wouldn’t we rather just slap the thing together and be done? Sure. But if you take that time at the beginning to introduce yourself, you’re more likely to find little hiccups. You know, before you start putting any real weight on the thing.

 

What I learned this week about Medieval dagger fighting

I’ll not get into why I needed to know this…just trust that it made sense at the time. And it wasn’t because I myself need to wield daggers.

Fine. It’s because this year I am actually planning ahead for NaNo. I can’t say why. It went ok last year, when I had a flimsy sort of outline. Maybe this year I have a storyboard. Maybe this year I started researching things before they come up in the story, and I lost precious writing hours to watching YouTube videos.

YouTube videos about dagger fighting.

And two names that kept coming up were the Arte Athletica by Paulus Hector Mair and the manuscripts of Joachim Meyer.

Arte Athletica is a manuscript from 1545ish, written by a German fencing master (Mair), and it’s generally considered one of the most complete manuals available today of fighting styles from this time period. It’s technically made up of two codices, each building off of an earlier body of work and updated to fit those more modern times. And it has waaaay more than daggers; have nothing on hand but a sickle? My man Paulus has you covered.

Joachim, on the other hand, made treatises that were compiled into Gründtliche Beschreibung der Kunst des Fechtens, or as I like to call it “how to royally destroy a dude’s day.” Where Paulus compiled what was there, Joachim decided to reinvent the wheel. Kind of.

Regardless, each of these sources provides thorough instructions and, in some cases, pictures, which have been used by SCA enthusiasts looking for that authentic Germanic medieval feel.

Daggers were not meant to be a primary weapon but used in conjunction with (or as a backup to) a sword. Ironically (perhaps) one of the only games I’ve played where a rogue in fact fights with a sword and a dagger, instead of two daggers, is Dragon Age: Origins. So kudos to BioWare…even more kudos to them. They brought me Mass Effect.

The bulk of blocking came from the concept of aiming at the wrist, but given its size, more often than not a fighter would miss, and so the follow-through movements of blocking over or under (too soon or too late) make up a good portion of the maneuvers that one would use.

To avoid slicing through your own arm during a fight, a common dagger of choice was a Rondel, a three-sided blade that was only sharp toward the tip, used for puncturing. As one video I watched pointed out (ah ha!), it was the ice pick of daggers.

I learned that the techniques for dagger fighting, as with any martial art, come down to basic principles, the same basic movements upon which one builds.

I also learned that in today’s world, it’s still primarily white dudes who seem to be worrying about this.

What I learned this week about Pardot Business Units

Okay, listen.

I started out with Marketo. I cut my teeth with it, learned what marketing automation was, re-acquired some HTML and CSS skills to make email templates better, and I yelled a lot about how important it was for us to be segmenting our content. So much power.

I say that because now I work with Pardot. Pretty much exclusively.

And it’s different. Different.

There are parts that I have really enjoyed learning, things that I think it maybe does better. There are things it doesn’t do as well. But I’m not talking about that now. Just sort of setting the stage here.

What I want to talk about today is Business Units. Because I’ll tell you…over the past few weeks I have learned a LOT. Almost exclusively through trial and error because the documentation is thin on the ground.

What’s a Business Unit?

I have two very distinct teams – a US-based sales group and a UK-based sales group. I don’t want them touching each other’s data, but historically when I bring them into Pardot, they get all blended together.

Business Units.

Marketo had a similar function back in the day that my at-the-time company considered, but it wasn’t necessary.

Business Units create two distinct databases within a single Pardot instance. Or more. I’m using two because it’s simple.

What are the prerequisites?

Hooo boy. Hold onto your pants for this one.

Pardot changed their connector in February 2019. In theory, if you purchased Pardot any time after that, you’d be using the Version 2 Pardot Connector – this is a prerequisite for Business Units. It’s also not entirely true? I definitely had clients onboard in April who didn’t have the Version 2 connector.

But that’s the first prerequisite.

Also you need to, you know, pay for them.

Finally, and I cannot stress this enough, you need to read, re-read, and re-re-read the documentation. Plan this out. Know your business units ahead of time:

  • What will the name be? You cannot change it in Setup after the fact.
  • Who will be the admin for that Business Unit? If you purchased BEFORE April 25th, you will be UNABLE TO SWITCH BETWEEN THEM.
  • Which users will be assigned to which Business Units? (see note above)
  • Which Contacts and/or Leads will be assigned to which business units? Like users, a Contact/Lead can only be assigned to ONE Business Unit.
  • How will you identify the appropriate Contacts/Leads for each Business Unit?

Go over that list of questions more than once. I promise if you think you have it in your head and are ready to go, it will not hurt you to 1) go over it one more time and 2) write it down.

Know what you have available

In one year alone we’ve had all of these changes to options. If at all possible, figure out ahead of time which version you have. If you have the earlier version of Business Units, again, you cannot switch between them. That means duplicate user records, if you intend to have users in more than one business unit.

Plan Ahead

If this theme hasn’t been made clear enough, it is so important that you plan through this ahead of time. If you encounter an issue, it could take weeks before it is resolved.

Ultimately the idea is a good one – we have multiple corporate entities that share a Salesforce instance, but their marketing efforts are different, and we need to keep them separate. Once upon a time you would have done this by potentially purchasing multiple Pardot instances and connecting them to your shared Salesforce org, but with the way Pardot’s connector behavior is changing, that would no longer be possible.

Thus Business Units.

It’s a good idea, and with the most recent updates to the product, they are moving in the right direction. Just be diligent. And plan ahead.

What I learned this week: Providers & Self-Signed Certificates

I would say that about once a month a client or coworker sends me an email that looks like this and asks “what do I do?”

SelfSignedCert has expired
SFDC Expired Certification Notification

I remember getting my first one of these and panicking, and the documentation available for admins with little knowledge of single sign-on is poor. I am pretty sure that we have all found the answer via the Answers section of Salesforce’s Help, as opposed to actual documentation.

I have kept a link on hand to share for just this occasion (it’s here, in case you need it).

Fast forward a few years, and I’m studying security and identity more in-depth than I have in the past, and much like data skew, that involves learning the correct terms for what used to sound like jargon.

As the link above to Salesforce’s help article states, this Self-Signed certificate is most commonly used for Single Sign-On settings, but…what does that mean? As with anything else, stating the purpose or cause of something doesn’t always answer a person’s question. Many people much smarter than me have rightly pointed out that if you cannot explain a concept to a child, you do not truly understand that concept. And Salesforce’s Help Articles aren’t always great for that level of explanation.

So let’s start with the basics: Single Sign-On.

If you work for a company in an office, you may already experience this every day. You log into your computer, and doing so logs you into other company services – an extranet, your inbox, etc. To varying degrees, the idea is in the name – you sign in once to multiple platforms.

Ultimately this works because there are two entities working together to allow this to happen.

The Service Provider is the system you’re being logged into secondarily – let’s say JIRA. This is the platform that is requesting your login credentials. Normally this request looks like a login screen, but for single sign-on the whole point is that you bypass that screen. So instead of asking YOU, it asks the system you’re logging in through.

This initial system is the Identity Provider. It is helpfully passing along your credentials to the system that needs the information.

Salesforce, as you can imagine, can be both. And the self-signed certificate is sort of like your global permission slip. And like a permission slip it needs to be updated every once in a while.
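To make the handshake less abstract, here’s a toy sketch in Python. This is NOT real SAML (real single sign-on uses X.509 certificates and XML signatures), but it shows the core trust move: the Identity Provider signs an assertion, and the Service Provider accepts the login only if the signature verifies against key material it trusts. When the certificate backing that trust expires, this check is what starts failing.

```python
import hashlib
import hmac
import time

# Toy stand-in for the key material behind a certificate.
SHARED_KEY = b"stand-in for the certificate's key material"

def idp_issue_assertion(username):
    """The Identity Provider vouches for a user by signing an assertion."""
    payload = f"{username}|{int(time.time())}".encode()
    signature = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return payload, signature

def sp_accept_login(payload, signature):
    """The Service Provider trusts the assertion only if the signature
    checks out against the key material it holds."""
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

payload, sig = idp_issue_assertion("jane@example.com")
print(sp_accept_login(payload, sig))         # True
print(sp_accept_login(payload, "tampered"))  # False
```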

“But I don’t have single sign-on enabled!” you cry.

Well sure, that makes sense. That means that Salesforce may not be a Service Provider in your org.

Have you installed any connected apps, though? Many connected apps walk you through a setup process that includes a handy UI taking on the heavy lifting of setting up your API connection. During this process, some of those apps may create a certificate, which you’ll see linked when reviewing your connected apps. Sometimes these will take care of themselves – the third-party companies you’re working with KNOW about this, and they plan accordingly – but at the least, you’ll know.

And if you’ve enabled Salesforce as an Identity Provider, even if you’re not using it that way…well, there you go.

Long story short: if you don’t remember setting this up, it’s very unlikely to cause issues, but it’s also very easy to update. Bookmark that link, and next year when you get that email, you’ll be ready.

 

What I learned this month: Adopting owned pets

In February we had crazy cold weather here in Michigan – not as bad as some places, but cold enough that when I looked outside one morning and saw a cat wandering through the snow, I knew I had to put something out for it. We found an old cat carrier downstairs, put some old towels in it, and put it out on the front porch near the garage access door, to keep it out of the wind. We put out some old cat food that our picky eaters wouldn’t touch anymore.

The next day the food was gone, so we replenished it.

For the month of February we had about five or six neighborhood cats come and go regularly. We didn’t always see them. Sometimes it was just a mass of paw prints in the snow around the food bowl that was now miraculously empty. We named all of the cats, but our most common visitors were:

  • Tux – a lifelong neighborhood cat, the roughest guy on the block
  • Shadow – a small, polydactyl black cat
  • Mandarin – a small orange tabby, to whom we assigned Most Likely to be Trapped Twice With Food
  • Flerken – a tiny (seriously tiny) gray tabby, who got very pregnant at some point and disappeared for a month or so

This continued into March, as the cold clung to the area. By April, we were down to two regular visitors and one permanent tenant.

We had long suspected that Shadow had been, at one point, indoors. She was quick to trust us, liked to be around us, and seemed generally less adapted to being outdoors. By May, she was happily playing with us on the porch, rubbing our legs, letting us pet her.

I was quickly infatuated. I mean…a tiny black cat. Polydactyl. I never stood a chance.

As summer continued, we sometimes saw Tux, but ultimately Shadow was the only one left, and she made it clear she had adopted us. She lived on our porch. She had regular feeding times. I wanted to bring her inside, and the long process started in late June.

For those of you uninitiated in the cat world – cats are NOT easy to integrate with an existing colony of cats. While we only had two, they were still basically a colony. And that’s the least of potential issues.

FIV, Feline Immunodeficiency Virus, is one major concern. Most commonly spread via bites from infected cats, it’s similar to HIV. Cats infected with FIV can live normal lives, so long as they avoid infections, especially from major concern #2 – Feline Leukemia. Shadow herself does not fit the bill of a common carrier; because FIV is most commonly passed via a bite, outdoor males are the most susceptible. But she had been outside with males, and it was certainly possible that she could have gotten it.

Concern #2, Feline Leukemia (FeLV) is also transmitted via bites, but it can ALSO be transmitted via normal behaviors, like mutual grooming.

House cats are usually vaccinated against these viruses, and they are at less risk most of the time, being kept inside with other cats that have been vaccinated.

But before bringing Shadow inside, we needed to be sure. We had gained her trust enough for me to pick her up, and on July 1, I was able to put her in a carrier and take her to the vet.

We had a lot to check on, so I wasn’t too surprised when they whisked her away to the back and 10 minutes rolled by. 15. 20. At which point the vet returned to tell me that they had found a microchip and were tracking down her owner.

It had always been a possibility, of course.

What I had not considered was that the owner would be found and would agree that, since we had been caring for her for the past 6 months, she was likely better off staying with us. So on July 1, I came home with a new cat.

Test results started coming in.

  • No FIV
  • No FeLV
  • We started her on a dewormer, and by the time we were able to get a sample to the vet, she was free of those, too

And I went through the process of transferring her microchip data to us. That was an exercise, but it was much easier than I thought it would be.

So now here we are, outnumbered and loving it.

I have always believed that we are chosen by our furry friends and not the other way around, and I think this past month has simply proven that.

 

 

What I learned this week: Airport Runway Capacity

Over the past year, I have flown to and from New York 7 times. That doesn’t seem like a very large number unless, like me, you prefer the comforts of home and Electric Hero subs from a few blocks away.

Being in Grand Rapids, my direct flight options are a little bit limited. Specifically I can go to Newark, or I can go to LaGuardia. Or I can do a multi-leg journey to JFK. Since interviewing at Arkus, I’ve chosen LGA every time except one time going to Newark and questioning my life choices the entire time.

LaGuardia has been undergoing MASSIVE reconstruction since I started flying out there in 2016, and it has made traveling through the place a greater headache each time. If the standard traffic weren’t enough, you now have to compete against road closures, construction zones, and entire areas of the airport suddenly inaccessible that were right there two months ago. Keeps me on my toes, that’s for sure.

On my last visit, I couldn’t help but wonder, sitting at a standstill in a line of cars, waiting to exit the airport grounds, and looking at brightly colored signs happily declaring that “a better LaGuardia is coming!” just how long this could possibly go on. What sort of purgatory were we collectively experiencing? So I Googled it, and apparently I’m not the first one to do this, since the suggestion was immediate.

2022. By the way. 

The part that intrigued me…that’s not fair. It was actually fascinating. The original airport was built in the 1920s, which blew my mind because…did Queens need an airport then? Apparently. The next terminal was built in the 60s, then the 80s, and finally the 90s, and so they ended up with this Tetris kind of place. Not the point.

The part that REALLY piqued my interest was a line toward the end that they are going to add 2 miles of runway, which will help increase the airport’s capacity and decrease some of the issues they have with delays. (Did I mention that I read this while my flight was delayed by over an hour? Yeah. So at least I could understand the root cause.)

What does that have to do with anything, though? How would two miles really have an impact?

As it turns out, this is a Thing. Like an FAA thing. They produce semi-regular Airport Capacity Profiles (last updated in 2014) that determine, based on things like runway space and layout, just how many flights any given airport has actual capacity for. Specifically these reports identify the maximum capacity within a single hour of operation. These overall capacity reports are then broken down by things like weather conditions (visual, marginal, and instrument), realistic operational conditions, and even external factors that may have improved capacity since the last overview.

And you bet they have one for LaGuardia. I read it. But it didn’t quite explain how the two miles of runway would improve performance, so I had to keep looking.

Did you know StackExchange has a whole Aviation subdomain?

LaGuardia has just two runways – 4/22 and 13/31 – typically using one for arrivals and one for departures at any given time. Adding two miles of runway space could have a positive effect on the capacity of the airport, but adding pavement alone does not solve the problem. For instance, depending on the layout of the runways – parallel or perpendicular – you may have better capacity when the weather is cooperating (parallel) or more options and better sustained capacity when weather is less than ideal (perpendicular).

The mix of aircraft sizes could have an impact. If a very large, heavy aircraft lands, it produces more wake turbulence than a smaller craft, so having a larger variety could mean smaller planes have to wait longer.
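That wake-turbulence effect is easy to put rough numbers on. Here’s a sketch with illustrative separation times – made-up figures chosen to show the shape of the math, not FAA minima:

```python
# Rough arrivals-per-hour estimate from required in-trail separation.
# The separation times (seconds) below are illustrative, not real FAA
# wake-turbulence minima.
SEPARATION_S = {
    ("heavy", "small"): 180,  # big wake, small follower: longest wait
    ("heavy", "heavy"): 120,
    ("small", "small"): 75,
}

def arrivals_per_hour(sequence):
    """Estimate one runway's hourly arrival capacity for a given
    leader-to-follower sequence of aircraft size classes."""
    pairs = list(zip(sequence, sequence[1:]))
    avg_gap = sum(SEPARATION_S.get(p, 90) for p in pairs) / len(pairs)
    return 3600 / avg_gap

# Uniform small traffic vs. a mix with heavies sprinkled in:
print(round(arrivals_per_hour(["small"] * 10)))           # 48
print(round(arrivals_per_hour(["heavy", "small"] * 5)))   # 26
```

Same runway, same hour, but mixing in heavy aircraft cuts the theoretical arrival rate nearly in half – which is why the traffic mix matters as much as the pavement.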

The sequencing of arrivals and departures – how many planes are arriving vs. leaving? Will we have room for them? Better get that right.

Sequencing across airports – LaGuardia is in what’s considered the NY/NJ/PHL airspace, which supports flights to LGA, JFK, EWR, and PHL. And as it turns out, big freaking flying machines need room to maneuver, so it’s not just the flights into and out of LaGuardia that need to be considered.

Runway exits. Wind strength in the area. Noise constraints. Lateral separation. So. Many. Things.

By the time I read through the capacity report, learned from the experts on Stack Exchange, and took a moment to consider all of the other things going on around a tarmac, I realized two things.

  1. It is very unlikely that adding two miles to the runways at LaGuardia will have THAT big of an impact.
  2. It is kind of a miracle that we ever get anywhere when it comes to flying, so maybe be nicer to the folks at the desk.

What I learned this week: Data Skew

Disclaimer: In the spirit of full transparency, I learned about data skew a little while ago. But the whole point is “what I learned this week.” In some cases, “this week,” just refers to this week in time…like…last week, last month, whatever.

My first brush with NPSP was as a consultant. I remember very clearly thinking that some of the features would have been very handy for my B2B sales staff back in the day. In a lot of ways it was love at first sight. I still get prickly when people say mean things about it…

[Insert about a half hour of me looking for the best option for a “Don’t talk to me or my son ever again” meme before realizing there could potentially be a better use for my time.]

That said, the first time I started getting error emails at about 2am was ALSO around this time.

You know, this one:

Message: “First error: Update failed. First exception on row 0 with id 001……………; first error: UNABLE_TO_LOCK_ROW, unable to obtain exclusive access to this record or 1 records: 001…………….: []”

And I was flummoxed. What does that even mean? Why are you locking anything? Who said that you needed exclusive rights? And what does this have to do with merging records?

For a while I sort of…ignored it. Honestly it would run again at some point, right? It rarely happened more than once for the same record.

Sometimes I would have dozens of them. Usually right after some major data change or something. I suspected they were related, but I had other pressing concerns, and eventually everything would be sorted.

Over time I filled in the blanks. Unable to lock row meant that whatever the code was trying to do, it couldn’t get update access to the record.

If I spent more than 30 seconds on it, it made sense. A record cannot be edited by more than one person at a time, so why would it make an exception (ha ha – get it?) for custom code?
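Here’s a toy model of the situation, using Python’s threading locks as a stand-in for database row locks (the real mechanics in Salesforce are more involved – updating a child record also takes a lock on its parent, which is exactly why skewed parents hurt):

```python
import threading

# One lock per parent record. With data skew, thousands of child
# updates all funnel through the SAME parent's lock.
parent_locks = {"Individual": threading.Lock()}

def update_child(parent, timeout=0.1):
    """Try to update a child record, which requires its parent's lock."""
    lock = parent_locks[parent]
    if lock.acquire(timeout=timeout):
        try:
            return "updated"
        finally:
            lock.release()
    return "UNABLE_TO_LOCK_ROW"

# While some other batch job holds the parent's lock...
parent_locks["Individual"].acquire()
print(update_child("Individual"))  # UNABLE_TO_LOCK_ROW
parent_locks["Individual"].release()
print(update_child("Individual"))  # updated
```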

And then again, for a while, I left it at that.

Enter Data Architect Trailmix, stage right.

A super important part of the large data volume considerations that are discussed in the data architect arena is the concept of data skew. And as I read about it, I was taken back to a project early on, a move from the Starter Pack and a bucket model to NPSP with Household Accounts.

This client was looking to upgrade to the new success pack. They had been using the bucket model for YEARS – more than 50,000 contacts all inelegantly shoved into this single Account called “Individual.”

It was difficult to report on things. It took forever for the record to load.

I knew that there was a correlation, but I could not, especially at that time, explain what it was. I had a sense that having too many child records was a bad thing. I didn’t know what to call it. And I wouldn’t know, until years later, that that very situation was what caused errors during the overnight batch processing.

Data skew occurs when a single parent record has too many child records, plain and simple. It has an impact on loading time (you try showing a record while querying tens of thousands of child records at once), on reporting, and…yes, on automation.
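If you want to check your own org, an aggregate query along these lines can surface skewed parents. The 10,000-child mark is the commonly cited guideline for account data skew; adjust to taste:

```sql
SELECT AccountId, COUNT(Id) childCount
FROM Contact
GROUP BY AccountId
HAVING COUNT(Id) > 10000
```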

It doesn’t exactly help me fix the errors all the time. Sometimes it’s just bad timing, and not even because of data skew. But putting a name to something makes it more accessible, less concerning.

Carry on, NPSP. Carry on.