Experimentation (A/B testing) in Edge Delivery Services: From Hypothesis to Insights
In this talk, we will explore the fundamentals of privacy-first experimentation (A/B testing) in Edge Delivery Services without compromising webpage performance. We'll start by discussing key terminology such as variants (the control group and the challengers) and the concept of splits. Then we'll demonstrate how easy it is to set up experiments in edge delivery environments using document-based authoring.
In the second part, we'll cover the process of forming a hypothesis for an experiment, identifying experimentation candidates, and common pitfalls encountered during A/B tests.
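To give a flavour of the document-based setup: an experiment is declared through page metadata in the authored document. The sketch below follows the common property-name conventions of the experimentation plugin; the experiment name and challenger paths are made up for illustration.

    Experiment          : hero-cta-test
    Experiment Variants : /experiments/hero-cta-test/challenger-1, /experiments/hero-cta-test/challenger-2

The page carrying this metadata serves as the control; traffic is split across the control and the listed challengers (evenly, unless configured otherwise).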
Ekrem
For content experiments, it is straightforward: by convention, we keep the variants under the /experiments/{exp-name} folder, which pretty much acts as an archive. For code experiments, it is not that straightforward; you can use the git history for that purpose.
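For illustration, the content tree for such an experiment could look like this (names are made up):

    /experiments/
        hero-cta-test/
            challenger-1    (variant document)
            challenger-2    (variant document)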
Ekrem
Those tools serve different purposes. We use contextual experimentation (the plugin we presented) to quickly validate hypotheses based on context (RUM). You don't need to get consent from your users to run experiments, so you don't need to delay loading the content until the user gives consent. Adobe Target, however, is a full-blown A/B testing (and more) tool which allows you to experiment with different scenarios.
Oleksandr Tarasenko
Adobe Target still offers way more than the Experimentation Plugin in EDS. Is there a way to get the best of both - the power of AT and the performance of the Experimentation Plugin?
Lars
You can use both, see here: https://www.aem.live/developer/target-integration
Oleksandr Tarasenko
Lars, thanks. However, the mentioned article is about AT in EDS. I am more curious about the synergy of the EDS Experimentation Plugin and AT.
Francisco
Giving access to the RUM data is still an ongoing discussion. At the moment, the only way to get access to the data is through the mentioned co-innovation program. If you are interested, contact us on Discord, or on Slack if you have a channel with Adobe.
Beo
What's the licence behind it? Can I just include it on my page?
Ekrem
Michael
What is the size in kB of all the additional JS/CSS injected?
Ekrem
The default flavor we demoed is 9.5 kB.
puradawid
Can experiment details be hidden so the end user is not able to scrape, or reverse-engineer, currently running experiments?
Ekrem
The experimentation pill is only visible on the preview environment.
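As a rough sketch of how such gating typically works (not the plugin's actual code), the overlay can be loaded only when the page is served from a preview or local host:

    // sketch only: load the experimentation pill (overlay UI) on preview/local hosts
    const { hostname } = window.location;
    const isPreview = hostname === 'localhost'
      || hostname.endsWith('.hlx.page')
      || hostname.endsWith('.aem.page');
    if (isPreview) {
      // dynamically import and render the overlay here
    }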
Michael
Does this work with dynamic rendering (e.g. React)?
Lars
Frameworks like React etc. are not recommended: https://www.aem.live/docs/dev-collab-and-good-practices
Michael
:(
Beo
Does this work with AEM as a backend as well?
Beo
Just to be clear: this solution is based on code? It takes a developer to set up, right?
Ekrem
It only takes an initial setup by a developer. After that, authors can run content experiments on their own. For UX experiments, however, they would need developer help.
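To illustrate that one-time setup: the integration essentially means wiring the plugin into the project's scripts.js. The sketch below assumes the plugin is checked out under /plugins/experimentation and exposes an eager entry point; the import path, function name and options object are assumptions, so check the plugin's README for the exact API.

    // in scripts.js (sketch; import path and function name are assumptions)
    import { getMetadata } from './aem.js';

    async function runExperimentation(doc) {
      // only load the plugin when the page actually declares an experiment
      if (getMetadata('experiment')) {
        const { loadEager } = await import('../plugins/experimentation/src/index.js');
        await loadEager(doc, { audiences: {} });
      }
    }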
Tad
How do you configure how a successful challenger is selected? Is it just interaction? Click-through?
Francisco
We have tools to analyze the data, which at the moment are only accessible via the co-innovation program. In these tools you can check the events that happened on the page where the experiment ran, see which variation was displayed, and decide which event or combination of events counts as a conversion/success, based on your use case.
Tomek Niedzwiedz
How does one track conversions in this approach?
Ekrem
In my personal experience, the definition of "conversion" changes from website to website. By default, RUM collects clicks along with other data such as block views, media views, etc. We generally optimize for engagement rather than actual conversions. The reasoning is as follows: when you think about the conversion funnel, there is a flow from all visitors > engaged visitors > converted visitors. The engaged-to-converted ratio is the same across variants, since the variants always target the same portion of visitors. Hence, if a variant wins over the control on engagement, it wins on conversions as well.
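A quick illustrative calculation of that reasoning (all numbers are made up):

    // if the engaged-to-converted ratio is identical across variants,
    // the variant that wins on engagement also wins on conversions
    const control    = { visitors: 10000, engagementRate: 0.10 }; // 10% engage
    const challenger = { visitors: 10000, engagementRate: 0.12 }; // 12% engage
    const conversionRateOfEngaged = 0.20; // assumed the same for both variants

    const conversions = (v) => v.visitors * v.engagementRate * conversionRateOfEngaged;
    console.log(conversions(control));    // 200 conversions
    console.log(conversions(challenger)); // 240 conversions -> the challenger wins on both metrics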
Francisco
When you are done with the experiment, you should remove the experimentation metadata properties from the page; otherwise the experiment will keep running. Regarding the challenger documents, it is not mandatory to remove them - it is up to you.
Jurgen Brouwer - AmeXio
Does EDS impact some basic principles like (session) stitching, for example?
Francisco
I assume the question is about data tracking. In RUM data there is no concept of a session. Each impression is an individual record, and the only past information is the referrer URL. Data is also sampled: by default, 1 out of 100 page views is tracked. This is how regulation compliance and consent-free collection are guaranteed.
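Conceptually, the sampling decision looks like this (a sketch of the idea, not the actual RUM client code):

    // with a sampling weight of 100, roughly 1 out of 100 page views is recorded
    const SAMPLING_WEIGHT = 100;
    if (Math.random() * SAMPLING_WEIGHT < 1) {
      // record an anonymous impression; there is no session id,
      // and the referrer URL is the only "past" information attached
    }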
Bryan
Is auto-allocation possible?
Ekrem
The flavor we demoed splits the traffic equally between variants. You can use the MAB (multi-armed bandit) flavor to achieve that.
Francisco
The way Edge Delivery works is by delivering very simple "semantic" HTML that is decorated on the front end. That makes the delivery of the content super fast. In AEM 6.5, the decoration is usually done in the backend, returning rich HTML. You could potentially add your JS logic to run your experiment at the beginning of the page. It's important that this JS is light and optimized.
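As a very rough sketch of what that could look like on a backend-rendered (AEM 6.5) page: a tiny inline script near the top of the head that assigns a variant before the page paints. Everything here (the storage key, class names, the 50/50 split) is hypothetical, and a real setup would also need to fit your consent and analytics requirements.

    // hypothetical inline script for a backend-rendered page; keep it tiny
    (function () {
      var key = 'exp-hero-cta';
      var variant = localStorage.getItem(key)
        || (Math.random() < 0.5 ? 'control' : 'challenger-1');
      localStorage.setItem(key, variant);
      // let CSS/JS further down the page react to the assigned variant
      document.documentElement.classList.add(key + '--' + variant);
    }());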