CAST 2019

Wednesday, August 14
 

9:00am EDT

CAST Welcome
Wednesday August 14, 2019 9:00am - 9:15am EDT
Sea Oats

10:45am EDT

Being a Tester After Trying Almost Everything Else
Ten years ago, I was a tester. Until I became so disgruntled with my work that I didn't want to do it anymore. Ever again. So, I started doing a lot of other things instead. I was a developer, a support engineer, I led an Ops team, and even worked in Marketing! And finally, I returned to the quality world.

Funnily enough, working in so many different roles allowed me to become a better engineer and tester. So I'd like to share with you how trying out new roles made me develop new skills that were also useful for my job as a tester. It also improved the way I interacted with different people and made me look at things differently. For example, do you know what's usually the most effective way to diagnose a bug submitted by a customer? Or can you guess what I struggled with the most as an engineer finding his place in a marketing department? I'll answer these questions and more!

Takeaways

  • Understand how context can make someone disgruntled with testing and not the job itself.
  • Trying out new roles can make you a better professional in ways you never imagined.
  • Knowing how a developer, customer support professional or others think allows you to better interact with people in these roles.


Speakers
João Proença

Quality Owner in R&D for OutSystems, OutSystems
João lives in Lisbon, Portugal, and is a Quality Owner in R&D for OutSystems, a company that provides one of the leading low-code development platforms in the world. He has had various roles throughout his career in the past 11 years, including quality assurance, development, customer...


Wednesday August 14, 2019 10:45am - 11:45am EDT
Sawgrass

10:45am EDT

It's Not Rocket Science, It's Far Trickier
Popular culture leads us to believe that rocket science and brain surgery are the most complicated things out there. "It's not rocket science" is uttered when the business describes the new feature they want, engineers explain the technical architecture, or marketing wants details of new features for the website announcement. This ex-rocket scientist thinks that most of what we do is far more complicated than rocketry (though she admits she's completely unqualified to comment on the brain surgery comparison) and has been known to retort "rocket science is OK, it's recovery that's tricky".

In a 40-minute talk we will cover a number of known live rocketry failures due to undiscovered bugs, major test failures during construction, as well as missions enhanced by successful collaboration. These examples will be linked back to practices that the testing industry may replicate or may rail against, for instance, full end-to-end regression tests after integration of individual components. The aim is for attendees to learn that taking inspiration from other industries can enhance their craft and increase the diversity of their test ideas.


Speakers
Nicola Sedgwick

Mindful Leader, Coach & Team Glue, Culturli
An avid enthusiast of agile ways of working, Nicola loves the way technology can enhance and transform the world around us. Nicola is often found working with a product and coaching focus to ensure agile teams collaborate between themselves, and with stakeholders, in order to eliminate...


Wednesday August 14, 2019 10:45am - 11:45am EDT
Dunes 1/2

10:45am EDT

My Love Affair With Testing (and you can have one too!)
Have you ever wondered how you can tie your life interests into your work? Do you feel like your many interests fit together and inform one another, but struggle to articulate how? Do you love to learn, and want to understand how your love for learning can joyously infect your entire life, including your testing? Jess will share pedagogies of learning (Zone of Proximal Development, Scaffolding, Metacognition, and Flow), and then do a live demonstration weaving testing, music, and other disciplines together in a fascinating talk where she lays her skills on the line to show you how you can tap into your passion for anything and turn it toward testing.

Speakers
Jess Ingrasselino, Ed.D

Director of Quality Assurance at Salesforce.org (USA)
Dr. Jess Ingrassellino is the Director of Quality Engineering at Salesforce.org, where she developed and implemented the testing philosophy. She is active in the education community as a member of the Industry Advisory Board for CUNY Techworks, and teaches Python and testing to participants...


Wednesday August 14, 2019 10:45am - 11:45am EDT
Sea Oats

2:00pm EDT

Creating Test Stability to Create Continuous Delivery
Our agile teams struggled to continuously merge local code changes to the master repository. In our company, our automated tests were taking 30 minutes of execution time, and the occurrence of flaky tests was multiplying this time and reducing confidence in the results. This slowed our agile teams down tremendously, to the point where they couldn’t deliver continuously with automated pipelines. Ensuring high quality for each iteration became a challenge for our agile teams as they had to wait for my analysis of the failing tests. We had created automated checks to make our testing more efficient, but instead they were slowing us down and causing bottlenecks - I was a bottleneck.

Our automated tests were maintained by various team members, and some of them were not following leading practices, which led to flaky results due to things such as concurrency relying on non-deterministic behaviour, caching, dynamic content and many other factors. I needed to find the root causes. I started my investigation and found that the common issues were caused by the environment, locators, coding practices, and a lack of knowledge sharing and code reviews. I improved our locators, coding practices and debugging while the developers helped us fix all the environment issues and slow page load times. It was a long journey, but it was definitely worth it.
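
To make one of these flakiness sources concrete, checks that race against non-deterministic page behaviour are often stabilised by waiting on an explicit condition instead of a fixed pause. The sketch below is a generic illustration, not taken from the talk; it assumes a Selenium-based suite, and the page URL and element ID are placeholders.

```python
# Hypothetical sketch (not from the talk): replacing a fixed sleep with an explicit wait.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/dashboard")  # placeholder page

    # Flaky version: a fixed pause that is sometimes too short and sometimes wasteful.
    # import time; time.sleep(5)
    # status = driver.find_element(By.ID, "sync-status").text

    # Stable version: wait only as long as needed, up to a 10-second ceiling.
    status_element = WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.ID, "sync-status"))
    )
    assert status_element.text == "Up to date"
finally:
    driver.quit()
```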

Currently our automated test suites take just 5 minutes to run in our CI environment - a huge improvement! Now my team members are more encouraged to look at why a test is failing, because the majority of failures are now genuine and provide valuable information. As a team, we get to spend more time testing and creating new automated tests, which has resulted in finding more bugs. We’ve also put things in place to avoid ending up in a flaky nightmare again, such as a wiki page of leading practices. I’m no longer a bottleneck, the team works together to maintain fast, valuable automated tests, and I want to share the insights from that journey with you.

After this session, you’ll learn techniques to:
1. Treat test code as production code and test environments as production environments
2. Improve the stability of your tests which in turn improves your continuous delivery pipeline
3. Make tests faster to enable fast feedback


Speakers
Trisha Chetani

Software Tester
I am an automation enthusiast. I have enjoyed being a software tester, helping teams follow testing processes and providing support that enables them to deliver high-quality software in a DevOps environment. I'm always enthusiastic about attending conference meetups for professional develop...


Wednesday August 14, 2019 2:00pm - 3:00pm EDT
Dunes 1/2

2:00pm EDT

Quality is a Team Responsibility
We are told that when quality is owned collectively, as opposed to being gated by a tester, overall quality can improve. Teams begin to contribute to quality through testability, identifying design flaws in stories, and testing earlier in the delivery process.

But this only scratches the surface of what can be achieved.

If we think of software delivery as a journey to achieving desired business outcomes, quality is the GPS that shows us where we are and how far we have to go.

This talk looks at some ways to better understand quality, what it means to you, your team and business partners. It looks at ways to frame quality in terms of business outcomes to help us keep on the right track. It will show ways to visualise quality so everyone can see where we are on our journey.

Do I believe testers have a place in this future? Absolutely! In fact, I think the role of a tester is needed more now than ever but perhaps not in a way we have traditionally seen our roles.

A must for anyone moving to contemporary engineering approaches.

Speakers
Anne-Marie Charrett

Test Consultant, Testing Times
Anne-Marie Charrett is a testing coach and trainer with a passion for helping testers discover their testing strengths and become the testers they aspire to be. Anne-Marie offers free IM Coaching to testers and developers on Skype (id charretts) and is working on a book with James...


Wednesday August 14, 2019 2:00pm - 3:00pm EDT
Sea Oats

3:15pm EDT

5 Levels of API Test Automation
In my context we run a microservice architecture with a number (300+) of API endpoints, both synchronous and asynchronous. Testing these in a shared environment with cross-dependencies is both challenging and very necessary to make sure this distributed monolith operates correctly. Traditionally we would test by invoking an endpoint with the relevant query params or payload and then asserting the response code or body for valid data / type definitions. This proved to be more and more challenging as the push for CI, combined with common data sources, meant dependencies would go up and down per deployment, which meant flaky tests.
I will demonstrate how we leveraged newer technologies and split our API testing into 5 levels to increase our overall confidence. The levels are (ignoring developer-focused unit and unit integration tests):
  1. Mocked black-box testing: start up an API (Docker image) identical to the version that would go to PROD, but mock out all of its surrounding dependencies. This gives you freedom to use any known data permutations, and you can simulate network or failure states of those dependencies.
  2. Temporary namespaced API in your CI environment: start up your API as it would run in a normal integrated environment, but in a temporary space that can be completely destroyed if tests fail. It never gets to the deploy stage, so there is no need to roll back if errors occur; use Kubernetes and CI config to orchestrate these tests. The focus of these tests is to check the 80/20 functionality and confirm that the API will meet all the acceptance criteria.
  3. Post-deployment tests: usually called smoke tests, these verify that an API is up and critical functionality is working in a fully integrated environment.
We should be happy by now, right? Fairly happy that the API does what it says on the box… but…
  4. Environment stability tests: tests that run every few minutes in an integrated environment and make sure all services are highly available given all the deployments that have occurred. Use GitLab to control the scheduling.
  5. Data explorer tests: tests that run periodically but use some randomisation to either generate or extract random data with which to invoke the API. These sorts of tests are crucial for finding edge cases that are usually missed: often low-occurrence but generally high-risk issues. I wrote a custom data extractor that runs against our DBs to find strange data sets to use as test data.
I would like to elaborate on and demonstrate these layers and their execution, and how this has changed the way we test and look at APIs. I will also touch on the tooling we use to achieve this and the pros/cons of this approach.
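
As a flavour of what level 3 can look like in practice, here is a minimal, hypothetical post-deployment smoke check; the base URL, endpoint paths and the pytest/requests tooling are assumptions for illustration, not the speaker's actual setup.

```python
# smoke_test.py - hypothetical post-deployment smoke check (level 3).
import os

import pytest
import requests

# Base URL of the freshly deployed service, assumed to be injected by the CI job.
BASE_URL = os.environ.get("API_BASE_URL", "http://localhost:8080")

# Placeholder critical endpoints that must respond for the deployment to be healthy.
CRITICAL_ENDPOINTS = ["/health", "/accounts", "/orders"]


@pytest.mark.parametrize("path", CRITICAL_ENDPOINTS)
def test_critical_endpoint_is_up(path):
    """Each critical endpoint should answer with a successful status code."""
    response = requests.get(f"{BASE_URL}{path}", timeout=5)
    assert response.status_code == 200, f"{path} returned {response.status_code}"


def test_health_reports_ok():
    """Assumed health schema: the endpoint returns JSON containing a 'status' of 'ok'."""
    body = requests.get(f"{BASE_URL}/health", timeout=5).json()
    assert body.get("status") == "ok"
```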

Speakers
Shekhar Ramphal

Quality assurance technical lead, Allan Gray
Shekhar is passionate about software testing and is a computer engineer by qualification. He has experience in full-stack testing in all areas, from manual QA, system design and architecture to performance and security, as well as automation in different languages.


Wednesday August 14, 2019 3:15pm - 4:15pm EDT
Sea Oats

3:15pm EDT

Making the Grade - Testing Undergraduate Software Engineers
As an Adjunct Professor of Software Engineering at McGill University, Robert Sabourin does a lot of grading and scoring of students using a variety of academic testing processes. Over the past twenty years Robert has evolved many different mechanisms to test students, extending the notion of open book to open university and beyond. Rob’s students are generally expert programmers with strong math and engineering skills but with little knowledge of the process, methods and practices of professional software engineering. Rob will share his experience through several entertaining stories. He will walk through some rubrics and grading schemes. Rob will explain the “testing” conundrums he has faced in the continuous battle with a bunch of intelligent students trying to do the minimal amount of work to get the highest possible grade. Rob hopes to inspire an interesting open season discussion to see if there are lessons we can draw from grading undergraduates. Rob promises an energetic, fun-filled session about his adventures warding off the paper chase and focusing students on learning the right things well.

Speakers
Robert Sabourin, P. Eng.

President, AmiBug.Com, Inc.
Robert Sabourin has more than thirty-five years of management experience, leading teams of software development professionals. A well-respected member of the software engineering community, Robert has managed, trained, mentored and coached thousands of top professionals in the field...


Wednesday August 14, 2019 3:15pm - 4:15pm EDT
Dunes 1/2

4:30pm EDT

Building Automation Engineers From Scratch
Creating automation engineers from manual testers is hard. Even if your testers are willing, they have a lot of hurdles to get over before they feel like the same kind of subject matter experts in automation as they are in manual testing. I’ve been there, done that, am currently going through it, and have advice for testers and their management.

As a career-long manual tester making the leap to automation, Jenny Bramble has experience to explain frustrations and provide solutions. She will discuss managing the expectations of your testers and their managers (what’s the time frame? Why isn’t this working?), techniques for teaching (such as games! Pair/mob programming! Software fundamentals!), and how to know when your testers have made it (what should manual testers be aiming for when they start?). You’ll walk away from the talk feeling empowered to create a plan to build your automation engineers from scratch!

You’ll walk away from this talk with a powerful new set of tools in your toolbox:
  • The basic framework your manual testers need to be successful—including how to determine where the gaps in knowledge are and how to fill them.
  • Advice on managing the expectations of your testers and management from time constraints to what success looks like.
  • Several methods for teaching framed around a case study of a team that built itself up from the inside out and is running a successful automation suite.
  • Ways of facing and overcoming other challenges such as ability and perceived ability, resources, time, and tooling, and how to get your team excited for a new chapter in their professional development.



Speakers
Jenny Bramble

WillowTree
Jenny came up through support and devops, cutting her teeth on that interesting role that acts as the ‘translator’ between customer requests from support and the development team before diving headlong into her career as a tester. Her love of support and the human side of problems...


Wednesday August 14, 2019 4:30pm - 5:30pm EDT
Sea Oats

4:30pm EDT

Coaching Your Team to Test
As the sole tester on a team that’s moving towards continuous delivery and building a DevOps culture, how can your team release frequently, and with confidence?
Within my Agile team, the testing activity had become the bottleneck. The testing ‘To Do’ cards on the team’s Kanban board were piling up. As the sole test specialist within my team I felt as if I was preventing us from being able to deploy code to our live environment. Frustrated, we got together as a team to discuss how we could fix this problem.
Our solution? We decided to share my exploratory and automated testing knowledge with the team. My role as the test specialist evolved into a coaching role, which made me feel both excited and nervous. We found ways to test throughout the development process. We learned to design test plans and discuss technical challenges together. We collaborated on the testing effort.
In this talk I’m going to share how my team removed the testing bottleneck, built in quality to our product and started to become true cross-functional team members by increasing collaboration. If you face similar challenges with your own team, you can try similar experiments.

Speakers
Ali Hill

Continuous Delivery Consultant, ECS Digital
Ali is a Continuous Delivery Consultant currently working for digital transformation consultancy ECS. He began his career as a software tester and went on to spend five years in various software testing roles. Passionate about quality, he has always focused on how to improve his teams...


Wednesday August 14, 2019 4:30pm - 5:30pm EDT
Dunes 1/2
 
Thursday, August 15
 

9:00am EDT

Opening
Thursday August 15, 2019 9:00am - 9:15am EDT
Sea Oats

10:30am EDT

Hello World – How I started in AI/ML and how you can too!!!
Artificial Intelligence and Machine Learning are making inroads into every conversation. Many people think that AI/ML is a new kid on the block. Surprisingly, it is a blast from the past which seems to be advancing at a rapid pace and will impact our lives significantly.

When I heard about its resurgence, I was curious to learn more about it. I had constraints: close-to-zero AI/ML experience, no data-scientist skills, and little infrastructure at my disposal. I wanted to dip my toes in AI/ML gently and practically, without getting overwhelmed or bored to death by reading obscure and dry books on this subject. I wanted to learn by doing rather than reading.

If you’re in a similar situation with regard to getting started in AI/ML, this session will provide you with the necessary information and practical guidance. I’ll share my experience of how I gently jump-started my journey into AI/ML using Azure ML Studio and how you can too. Then we’ll build/tweak/demo an AI/ML example and create a callable API endpoint for us to test and explore.

Furthermore, I’ll point you to resources, tips and tricks, which I’ve found valuable. In the end, you will learn AI/ML fundamentals and how to jump-start your AI/ML journey practically.
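
To give a sense of what a callable endpoint like this can look like from the testing side, here is a generic, hypothetical sketch of scoring a deployed model over REST from Python; the URL, key and payload schema are placeholders, since every published endpoint (including those from Azure ML Studio) defines its own input format.

```python
# Hypothetical sketch: invoking a deployed ML scoring endpoint over HTTP.
# The URL, API key and input fields below are placeholders, not a real service.
import json

import requests

SCORING_URL = "https://example-ml-service.example.com/score"  # placeholder endpoint
API_KEY = "REPLACE_WITH_YOUR_ENDPOINT_KEY"                    # placeholder key

# One input record; the field names depend entirely on how the model was trained/published.
payload = {"age": 42, "income": 55000, "owns_home": True}

response = requests.post(
    SCORING_URL,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    data=json.dumps(payload),
    timeout=30,
)

response.raise_for_status()  # fail loudly on 4xx/5xx so broken endpoints are caught
print("Prediction:", response.json())
```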



Speakers
Umang Nahata

Senior Systems Test Engineer, Progressive
Umang Nahata is passionate about sharing his knowledge with others. He loves code-jams, testing, test automation, and building better software. He has been in the IT industry for more than a decade in a variety of roles and experiences. He is currently a Senior Systems Test Engineer...



Thursday August 15, 2019 10:30am - 11:30am EDT
Sawgrass

10:30am EDT

Test Ideation: What Writing Taught Me About Testing
Does the creative mind belong in testing? Can drills from writing add anything to our craft as testers? If so, where do they fit in?

Long before my testing journey began, and shortly before studying computer science, I was 6 credits from a degree in creative writing. Now, I use these skills daily.

What if testing is both an analytical activity AND a creative one? After meeting and working with thousands of testers and test automation engineers, Paul Merrill has noticed that a common struggle for testers is creating tests. With numerous teams, Paul has found that several of the drills from creative writing programs can help with directing the mind in testing.

Join Paul for this mini-workshop. Bring a readiness to scribble at a whiteboard or privately in your notes, and an open mind. Learn from those around you and be ready to share your learnings as they come in this fun set of exercises you can take back to your team next week for better testing!

Speakers
Paul Merrill

Principal Software Engineer in Test, Beaufort Fairmont
Paul Merrill is Principal Software Engineer in Test and Founder of Beaufort Fairmont Automated Testing Services. Nearly two decades into his career spanning roles such as software engineer, tester, manager, consultant and project manager, his views on testing are unique. Paul works...


Thursday August 15, 2019 10:30am - 11:30am EDT
Sea Oats

10:30am EDT

Testing Satellites
Systems designed to be operated in space have a few unique problems that must be addressed. First, they cost a large amount of money. A “black box”, or in other words, a unit that performs a specific task and is part of a larger system, can cost up to several million dollars. Considering that redundancy is used to increase the reliability of space systems, these costs can easily double. A small communications satellite, which includes many units, runs into the tens of millions of dollars. A more complicated satellite, such as a radar imaging satellite, can cost a few hundred million dollars. A constellation of satellites, such as those proposed for worldwide communications, is in the range of a few billion dollars. This is just the cost of the hardware that is in space. Hidden costs include development and qualification, launch costs, and operation costs. Worse yet, you cannot go up there and fix it if things break.

These reasons alone are enough to justify a rigorous design, build and test philosophy. This lecture will expand on these ideas, but concentrate on the testing aspects. Verification and testing is different for manned systems compared to unmanned systems. Human life is precious, so manned systems undergo more stringent testing. This lecture will concentrate on unmanned systems, and how they are verified and tested. Specifying how one will verify and test starts from the top down. However, during the build process, testing is done from the bottom up. What is being tested and verified varies with each level of integration. These differences will be explained. Specific examples of testing and verification at the different levels will be shown.

Speakers
Vladimir Glavac

Mr. Glavac graduated from McGill University in 1983 in Electrical Engineering. He started off designing hardware for radar signal processing for commercial radar systems. He migrated to the design (HW and SW) of real‐time embedded systems for aircraft avionics displays. He spent...


Thursday August 15, 2019 10:30am - 11:30am EDT
Dunes 1/2

12:45pm EDT

"Git hook[ed]” on images & up your documentation game
Can you remember the difference between two hex color values? Me neither! 
Entering visual representations of recently-changed elements into version control makes review of past changes easier & speeds acclimation to a new web project, especially for visual learners. Surprisingly, methods for including images in your version control aren't standardized and are rarely used outside of large companies, and the rest of us are left checking out every major commit and viewing changes locally!

Join me for a review of methods currently in use and a discussion of the benefits and drawbacks of each. The audience will learn, from a survey of tools used by both designers and web developers, which methods are most appropriate for individual projects and how these methods differ from those used at some of the largest companies (Google, eBay, etc.). Finding a method to track changes in your visual elements will save our future contributors (and future selves!) the pain of having to distinguish #2dc651 (lime green) from #34a34e (darker(!) lime green) and ultimately make our commit histories cleaner and our repos easier to navigate in ways that many of us have never imagined!
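
As one hypothetical illustration of the general idea (not a method from the talk itself), a pre-commit hook can nudge contributors to stage an updated screenshot whenever style or markup files change; the screenshots/ directory convention and the watched file extensions below are assumptions.

```python
#!/usr/bin/env python3
# Hypothetical .git/hooks/pre-commit sketch: ask for a screenshot alongside visual changes.
# The screenshots/ convention and the watched extensions are assumptions for illustration.
import subprocess
import sys

VISUAL_EXTENSIONS = (".css", ".scss", ".html")

# List the files staged for this commit.
staged = subprocess.run(
    ["git", "diff", "--cached", "--name-only"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

visual_changes = [f for f in staged if f.endswith(VISUAL_EXTENSIONS)]
screenshots = [f for f in staged if f.startswith("screenshots/")]

if visual_changes and not screenshots:
    print("Visual files changed but no screenshot staged under screenshots/:")
    for changed in visual_changes:
        print(f"  {changed}")
    print("Stage an updated screenshot, or bypass with 'git commit --no-verify'.")
    sys.exit(1)  # non-zero exit aborts the commit
```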

Speakers
Veronica Hanus

Before becoming a programmer, Veronica was a researcher with an eye for process improvement (she helped pick the Mars Curiosity Rover’s landing site!). While teaching herself web development, she’s brought a researcher’s perspective from her time at NASA-JPL & MIT into whatever...


Thursday August 15, 2019 12:45pm - 1:45pm EDT
Sea Oats

12:45pm EDT

Lightning Talks
Thursday August 15, 2019 12:45pm - 1:45pm EDT
Sawgrass

12:45pm EDT

The Origins of Context Driven Testing
How did Context Driven Testing (CDT) and AST come about? Was it one or two people, or a committee effort? Who were the principal people who articulated the ideas? The answers provide an interesting perspective on how CDT evolved. Doug does not claim to be the father of CDT, but was an active participant and observer who recorded many of the activities that led to its creation. The session is a retrospective of the people and experiences that brought us here. 

Speakers
Doug Hoffman

Doug has been active in quality assurance and software testing for over 45 years. He is now an independent management consultant in software quality assurance and testing and he teaches classes on software testing, test automation design, and test oracles. He was a participant in...


Thursday August 15, 2019 12:45pm - 1:45pm EDT
Dunes 1/2

2:00pm EDT

Why is There a Marble in Your Nose
The first time I asked a student “Why is there a marble in your nose?”, it was a learning experience for me. That initial “why” led me down a path of questions that I hadn’t known I needed to ask - and it revealed that the real challenge wasn’t actually the stuck marble at all! The real challenge I needed to solve was that the student had outgrown naps and needed something to do while his classmates were sleeping.

Software testers come from a wide variety of backgrounds, which often influences how we carry out our role as testers. For instance, the skills involved in helping children transition from playtime to naptime enabled me to lead a rollout of process and workflow changes in our engineering department. My experience teaching math to 5th-grade students with an “I do; we do; you do” approach translates pretty directly to helping engineers learn a new testing framework. And of course, I learned the importance of asking “why”, which has made me a better advocate for users and engineering teams. Why did we make that design choice? Why should we iterate this process? Why isn’t this tool working out for us?

Every experience matters. We all bring a variety of expertise and lessons learned from previous roles or industries - and this is a good thing! It allows us to think outside the box and challenge the status quo; to include perspectives that would otherwise be lost or overlooked. Having a wider breadth of experience and knowledge makes us better testers, and I hope that my talk inspires you to think about how your prior jobs have impacted and improved the way you work in software testing as well.

Speakers
Angela Riggs

QA Manager, Instrument
I'm currently at Instrument as their first QA Manager, working to create Quality Engineering as an internal discipline! This means I get to solve people problems and nerd out on process improvements. 😄 I believe that empathy and curiosity are driving forces of quality, and enjoy...


Thursday August 15, 2019 2:00pm - 3:00pm EDT
Sea Oats

3:15pm EDT

Building Deep Thinking Tools for Exploratory Testers
I have been a functional exploratory tester. I was motivated to move out of exploratory testing and become that cool kid doing automation. Thankfully, someone pulled me aside and told me that I am more suited to being a functional exploratory tester and that I am business savvy. I didn't shy away from code: I worked closely with developers, and my testing approach involved reading code (I didn't write any) and brought value to developers and to the business. I grew up as a tester being coached by experts, reading blogs from experts and learning that automation will help exploratory testers do more.

While I waited, I found that very little effort has gone in that direction, so I partnered with developers to build some products.

I failed multiple times, and here are my failed attempts:
  1. Tool for Social Media Driven Testing for Testers
  2. Tool for mapping the heuristics and oracles to test ideas
  3. A checklist tool for testing mobile apps and scoring quality for start-ups
  4. Testing Depth Dashboards

I would like to share:
  1. The thinking behind building these tools and their value
  2. How focusing purely on automation takes away the possibility of building tools
  3. How I would love the community to start building tools
  4. How I can help the audience (without commercial interest) and how they can help me

I would like to tell my own story and how I (actually stealing credit from my team, who built them) came to build tools:
  1. What I did differently from other manual testers
  2. What problems concern the testing space, in my view
  3. The problems I picked and the way I tried solving them
  4. The failures and analysis
  5. The existing opportunity for people to build value beyond automation frameworks and scripts
  6. The vision of open-sourcing non-code deep thinking tools for testing

Speakers
Pradeep Soundararajan (IN)

CEO, Moolya Testing
Pradeep Soundararajan is the Founder CEO of Moolya Testing and AppAchhi. Pradeep is on a mission to build a software testing start-up that solves fundamental unaddressed pain points in testing. 15+ years of experience as a hands on tester, independent test consultant and now a businessman...


Thursday August 15, 2019 3:15pm - 4:15pm EDT
Sea Oats

4:20pm EDT

Closing
Thursday August 15, 2019 4:20pm - 4:40pm EDT
Sea Oats