Author: Chris Bexon

An Astrophotography Journey using Scrum

A story of learning in the complex domain

This was my first picture of the Great Orion Nebula using Sony Alpha A7RIII IMX-455

Even to get to this point I had been through many sprints of failed experiments, learning so much along the way about what works and what really, really doesn't, and creating incremental value towards my vision.

My first telescope

What was I thinking! Biting off more than I could chew for my first sprint. Long-running experiments and big mistakes are costly at this stage of learning. Time for smaller, easier experiments. The mount was actually faulty. I didn't know what I was doing, so I had no idea that it would only move in Declination and limited Right Ascension.

CGX-L Mount with Celestron Edge HD

Solving complex problems can feel almost impossibly frustrating sometimes. It's the passion to achieve your vision one small step at a time that gives you the energy to keep pursuing it. Mastering new technical skills that emerge through solving complex problems creates strong foundations and growth.

Vision: Enable people to Explore the Universe from the comfort of their own home

Product Goal: Just get something working

I love learning through experimentation. There's nothing like failing fast and setting up your next experiment. When it works it's pure glee. Every night was a new experiment.

I worked in one-week sprints. Even if the weather is poor there is always something to be done.

I swapped to a simpler, faster, shorter-focal-length refractor telescope to use as a learning platform: the William Optics FLT132 f/7, giving a 920mm focal length. It's basically a big telephoto lens. The main lens on the latest iPhone, for instance, is f/1.8 with a 26mm focal length, so very fast and wide-angle.

For starters there's setting up the mount and telescope: level, balanced, polar aligned, and aligned to multiple stars to build a navigation model for the mount.

Every telescope mount needs polar aligning to match the rotation of the Earth; in my case, aligning to the north celestial pole. Once aligned, it becomes much easier to find objects in the night sky. This means building a solid understanding of the Earth's rotation. Right ascension and declination act as coordinates for locating objects in the night sky, and what is visible changes constantly through the night and the year as the Earth rotates and orbits the Sun with its 23.5-degree axial tilt.
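
As an illustration of how those coordinates are used in practice, here is a minimal Python sketch using the astropy library (my choice for the example, not part of the setup described here) that converts the catalogue position of the Orion Nebula into the altitude and azimuth a mount would need at a given place and time. The observing site and time are purely illustrative.

    from astropy.coordinates import SkyCoord, EarthLocation, AltAz
    from astropy.time import Time
    import astropy.units as u

    # M42 (Great Orion Nebula) catalogue position in right ascension / declination
    m42 = SkyCoord(ra="05h35m17s", dec="-05d23m28s", frame="icrs")

    # Hypothetical observing site (roughly London) and a winter evening
    site = EarthLocation.from_geodetic(lon=-0.1 * u.deg, lat=51.5 * u.deg, height=50 * u.m)
    when = Time("2021-12-15 23:00:00")

    # Where the mount needs to point at that moment
    altaz = m42.transform_to(AltAz(obstime=when, location=site))
    print(f"Altitude: {altaz.alt.deg:.1f} deg, Azimuth: {altaz.az.deg:.1f} deg")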

Product Goal: Photographing the night sky

At night our eyes adjust to the dark and rely on peripheral vision. Everything looks black and white, and most of the detail in the night sky is hidden from our poor human vision. I need to collect more photons! That means new experiments. Each image is constructed from at least 50 sub-frame images; stacking many sub-frames means the signal outweighs the random noise (roughly in proportion to the square root of the number of frames) and produces lovely images. Lots more experiments to understand how this works.
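
As a rough illustration of the stacking idea, here is a minimal Python sketch using numpy and astropy (my choice of tools for the example); the file names and the simple percentile rejection are illustrative, not a full production pipeline.

    import glob
    import numpy as np
    from astropy.io import fits  # assumes sub-frames are saved as FITS files

    # Load every calibrated sub-frame of the target (paths are illustrative)
    frames = [fits.getdata(path).astype(np.float32)
              for path in sorted(glob.glob("orion_subs/*.fits"))]
    stack = np.stack(frames)

    # Percentile clipping rejects outliers such as satellite trails and cosmic-ray
    # hits; averaging the remainder grows signal-to-noise by roughly sqrt(N)
    lo, hi = np.percentile(stack, [5, 95], axis=0)
    clipped = np.where((stack >= lo) & (stack <= hi), stack, np.nan)
    master = np.nanmean(clipped, axis=0)

    fits.writeto("orion_stacked.fits", master, overwrite=True)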

Now that I had understood how to set up and operate the telescope and mount from a computer it was time to start experimenting with taking pictures.

Understanding how stuff works

Many sprints discovering astrophotography

  • Discover what a flattener is. Fit camera and flattener to telescope at the correct distance. 1mm out in any direction results in image distortion.
  • Understanding focal length and back focus distance
  • Creating a flat image. The lens is curved but a camera's sensor is flat! That means correcting with additional lenses
  • Exposure time and gain
  • Dew, Dew, Dew – experimenting with heating. Space is cold, so staring at it chills the lens. When the lens drops below the dew point you get dew. Dew is evil. Dew is bad.
  • Guiding to make corrections to the telescope's position during long exposures. Basically using a second camera to track a star and correct the mount's position. The better the alignment and model, the better the images.
  • Then there's also understanding, creating and using flat, dark and bias images (a minimal calibration sketch follows this list). As it happens, camera sensors create noise. Not all the pixels work: some pixels are hot and white and some are cold and black. Because the lens is curved, light does not fall on the sensor evenly and the frame is darker towards the edges. In a daytime image with lots of light you would not notice these issues, but when the signal from targets is very low every photon counts.
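
For anyone curious how those calibration frames actually combine, here is a simplified Python sketch of the standard arithmetic; the file names are purely illustrative and real pipelines do considerably more.

    import numpy as np
    from astropy.io import fits  # assumes frames are stored as FITS files

    # Master calibration frames, each typically an average of many exposures
    bias = fits.getdata("master_bias.fits").astype(np.float32)  # read-out offset
    dark = fits.getdata("master_dark.fits").astype(np.float32)  # thermal signal, hot pixels
    flat = fits.getdata("master_flat.fits").astype(np.float32)  # vignetting, dust shadows

    def calibrate(light_path: str) -> np.ndarray:
        """Apply the standard bias/dark/flat correction to one sub-frame."""
        light = fits.getdata(light_path).astype(np.float32)
        flat_norm = (flat - bias) / np.mean(flat - bias)  # normalise the flat to ~1.0
        return (light - dark) / flat_norm                 # dark frame already contains the bias

    calibrated = calibrate("orion_sub_001.fits")  # illustrative file name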

Product Goal: Eliminate Annoying Issues

Noise

As it happens, camera sensors get hot during long exposures of 5–20 minutes. This creates noise, and noise means bad pictures, especially in summer. High gain means shorter exposures but lost detail.

A cooled astro camera can cool the sensor to 35 degrees below the ambient temperature. There's a learning curve here too. Experimenting with full well depth (yes, literally electron buckets), gain, temperature, monochrome or colour, pixel size, dynamic range, noise, seeing conditions and arcseconds per pixel enabled me to fully understand the capabilities and limitations of these complex cameras.
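
The arcseconds-per-pixel figure comes from a simple relationship between pixel size and focal length. Here is a small Python sketch of that arithmetic; the numbers plugged in are illustrative (a 3.76 µm pixel behind a 920mm focal length).

    def image_scale_arcsec_per_pixel(pixel_size_um: float, focal_length_mm: float) -> float:
        """Sky covered by one pixel: 206.265 * pixel size (um) / focal length (mm)."""
        return 206.265 * pixel_size_um / focal_length_mm

    # Illustrative values only: roughly 0.84 arcsec/pixel for a 3.76 um pixel at 920 mm
    print(image_scale_arcsec_per_pixel(3.76, 920))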

The scourge of light pollution

It's a sad truth, but according to a study done by Italian and American scientists, one-third of the world's population and 80% of Americans cannot see the Milky Way. The Milky Way is an awesome cosmic wonder to behold and to ponder. Even more experiments are needed to remove the gradients created by streetlighting in the UK. Most of the lighting is not required. With modern technology, LED lighting could be switched on only when somebody approaches. LEDs are far worse than the old streetlights: they produce light across the whole visible spectrum, so it is difficult to filter out.

Next experiment: using various filters to try to tackle the UK's terrible light pollution and learn about narrowband filters.

The scourge of satellites

Say no more. Thanks to projects like Starlink and zero regulation of the night sky, Earth-based astronomy is at risk forever. The only way to deal with the trails they leave is to remove them in image processing.

Product Goal: Stay warm using Automation

I want to be warm. Winter is cold. At this point I need to be with my equipment to move between targets and take sequences of images of a target. Experimenting with automation using an industrial computer strapped to the telescope, accessed via WiFi or Ethernet. No laptop in the garden all night. Full remote control 🙂

Product Goal: More Advanced Telescopes

Refractors have great resolution, but if you want more reach then a reflector is better. Basically this is a mini-Hubble. They are difficult to use and require more advanced collimation, using a laser to line up two mirrors. This required going back to basics with experimenting. Getting more advanced too quickly can feel like a backward step. It's a risk that delays value but can be necessary to achieve the vision. It took three months to master the Ritchey-Chrétien.

Ritchey-Chrétien on a CEM60EC2 mount

Product Goal: Professional Image processing with PixInsight

Using a professional software package for processing data is a daunting prospect. The results are great, however. After a few hundred experiments I'm just about becoming a little more proficient and consistent. The image here was constructed from 240 monochrome Hydrogen-alpha and Oxygen sub-frames.

Product Goal: Think Bigger

During all my experimentation and getting creative I had become very proficient at setting up and consistently taking some great images. But everything was limited by light pollution, the wrong kind of air flow, the wrong type of weather, humidity and seeing conditions, which limit image resolution.

After much research, moving my equipment to a dark site in Spain or Chile used by astro professionals and amateurs alike was possibly the answer. Fregenal de la Sierra in Spain has an excellent Bortle-scale rating and everything I need. It's also closer than Chile if I need to visit. Chile is still on the list though.

It felt great setting up in Spain knowing exactly what I was doing, using best practices discovered through much exploration to my advantage: roll-off roof, dual-telescope setup, industrial computer control and remote access. 250 cloudless nights a year, very low light pollution and good seeing conditions.

Product Goal: Diagnosing issues remotely

Ah yes, something always goes wrong 🙂 When you can be there it's easier to diagnose and confirm issues. Experience, but more likely intuition, comes into play when solving problems remotely: autofocus slipping, a broken filter wheel, camera moisture levels too high, to name a few.

Creating my own software to analyze image quality in real time

With all that I had learnt I was able to start solving a few automation problems that professional packages lack. Sprinting towards this goal, I have had some great fun analysing the quality of images in real time, including field curvature and sensor tilt, and correcting for them.
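
This is not the code I use, but to give a flavour of the kind of metric such a tool computes, here is a minimal Python sketch, assuming the astropy and photutils libraries, that scores a sub-frame by the shape of its detected stars; the parameters and file name are illustrative.

    import numpy as np
    from astropy.io import fits
    from astropy.stats import sigma_clipped_stats
    from photutils.detection import DAOStarFinder  # assumed to be available

    def frame_quality(path: str) -> dict:
        """Crude per-frame quality metrics based on the shapes of detected stars."""
        data = fits.getdata(path).astype(np.float32)
        _, median, std = sigma_clipped_stats(data, sigma=3.0)
        stars = DAOStarFinder(fwhm=4.0, threshold=5.0 * std)(data - median)
        if stars is None:
            return {"stars": 0}
        return {
            "stars": len(stars),
            "sharpness": float(np.median(stars["sharpness"])),
            "roundness": float(np.median(stars["roundness1"])),
        }

    # Running this per corner of the frame and comparing the results hints at
    # sensor tilt or field curvature: one corner consistently softer than the
    # centre is a red flag.
    print(frame_quality("orion_sub_001.fits"))  # illustrative file name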

Experimenting creates waste! Not experimenting creates nothing good at all

Best advice: keeping experiments small, goal-focused and lightweight creates less waste and optimises value versus cost.

Product Goal: Automation for customers targets

Bringing it all together. These sprints were about enabling the public to request professional photographs of targets of their choice, delivering raw data and automatically processed images. The images enclosed in the blog are much smaller than the real thing; a single processed image may contain a gigabyte of HDR data. To reproduce these results starting from scratch could cost tens of thousands. With this service, results are guaranteed for a fraction of the cost.

Vision achieved

This is why I use the Scrum framework. Small iterations, small increments, maximising value, learning as quickly as possible while optimising investment. Scrum enables accelerated learning. This vision took 18 months to bring to life!

I now have a viable business model where people can request a target and receive a fully automated dataset and a processed image of their target.

Next Vision: Different focal length telescopes and Southern Hemisphere

How will you bring your curiosity and creativity to play through experimentation to solve problems in the complex domain and have fun doing it? To learn more check out https://www.bagile.co.uk/our-courses/ or contact us to discuss agile and team coaching.

The Great Orion Nebula a year after the first
Veil Supernova Remnants

Building healthy team relationships using great feedback

  • What’s the impact on team relationships when feedback is poorly delivered? 
  • What’s the worst feedback you have ever received? What made it the worst?
  • What's the best feedback you ever received? What made it great?
  • How comfortable are you in receiving feedback?
  • What’s the impact on relationships with your colleagues when communication is dysfunctional?
  • When the energy of your team and team mates is being impacted by poor communication are you sitting back and letting it happen or taking a lead to create a more positive workplace?

What role does emotional intelligence play in the formation of great relationships and teams?

Over the last 30 years I have seen people being shouted at in front of a hundred people in an office. I have witnessed poor feedback resulting in damaged, low-confidence people and teams, leading to subdued creativity, low psychological safety and a lack of confidence to take risks and try new things. Essentially, experimentation stops, true teamwork stops and team relationship conflict rises.

We are all leaders! We all have the ability to lead in our own lives! We can all grow our level of emotional intelligence for the benefit of everybody around us and help build teams with lightness, heart and soul that are truly inspiring.

Team Knocks and Antidotes

The scissor icons represent knocks to relationship health. Every time a knock occurs, without intervention, the relationship takes a step closer to red.

Feel into your current team relationships. What colour would your current team's health be?

The medicine icons represent repair bids, the actions you can take to move you into, and keep you in, the healthy green zone.

Stand up for Team Health

Engage your leader within and show the way. Stand up to poor communication and aggressiveness. Yes that may include yourself. By listening in to your own communication and practicing feedback we can all start to grow our emotional intelligence.

Let's take a look at a common feedback method.

Please first understand that receiving feedback well can be difficult and that feedback can feel like a social threat. We also know that receiving feedback can be triggering, causing our amygdala responses (fight or flight) to kick in. Given this, we think it’s good to focus on learning to receive feedback well before focusing on giving feedback well.

Context Observations Impact Next – COIN

Context

When you’re giving feedback, put it into context. When and where did you observe the situation? This gives the other person a specific reference point.

For example, you could say:

“When working with the team this afternoon discussing the XYZ solution I observed aggressive personal attacks on the quality of the work completed by the team”

Avoid vague terms like “the other day” or “in that meeting last week”.

Observations

Your next step is to describe the specific behaviors that you want to address. This is the most challenging part of the process, because you should only communicate the behaviors that you – and you alone – have observed directly.

Avoid making assumptions or subjective judgments about someone’s behaviors. These could be wrong, and they may undermine your feedback.

For example: “The feedback is being delivered with sarcasm and demeaning language aimed at the skills and experience of the team.”

Tip:

Aim to use measurable information in your description of the behavior. This will keep your feedback specific and objective.

Impact

Finally, use subjective statements to describe how the person’s behavior has impacted you, the team or the organization. Use “I” or “we” to make the point.

For example, you could say:

“The impacts on the team I have observed are damaged confidence in tackling complex technical problems and low self-esteem. This is resulting in reduced teamworking behaviours and is impacting self-management, creativity and the safety within the team to experiment.”

Throughout the process, emphasize the importance of finding positive solutions, and avoid “passing the buck” or playing the blame game.

Next

Asking about intent encourages a two-way discussion. It can help you to uncover why your team member behaved as they did.

It also gives them a chance to assert themselves and to open up about any problems that they’ve been experiencing. Perhaps they have confidence issues, or they feel that their skills and knowledge aren’t adequate.

Uncovering intent can also help you to address your own false assumptions. Your team member may have had a legitimate reason to behave the way that they did, which you haven't understood. This can help the initial feedback session develop into a useful coaching conversation.

Next will result in designing Actions and Learning with the coachee.

What’s Next

BAgile offers two courses. The first course, ICAgile Certified Agile Team Facilitator, develops advanced skills in facilitation, lean-agile facilitation and managing dysfunctional behaviours. These skills can be used in any situation to help every team bring out the very best in themselves.

The second course, ICAgile Certified Agile Coaching, further develops skills in professional, relationship and team coaching.

BAgile offer Co-Active Professional and Leadership Coaching and Relationship and Team Coaching. Please contact us for further details on professional services.

https://www.bagile.co.uk/our-courses/

Product Owners want a perfect Definition of Done. Here’s why.

Universe stars and galaxy

First it’s worth reading this section of the Scrum Guide https://scrumguides.org/scrum-guide.html#increment to reacquaint yourself with the formal definition.

Below is an example list of activities that represents Potentially Releasable. If all activities were completed, the product could be released to a customer.

Definition of Done

The Definition of Done is an agreed list of criteria that the product will meet for each Product Backlog item. The Definition of Done applies to all Product Backlog items. If more than one Scrum Team works on the product, they share the same Definition of Done.

The initial Definition of Done must be created and agreed before the first sprint. It forms an input into Sprint Planning to guide the Scrum Team on what tasks they'll need to perform to turn Product Backlog items into a potentially releasable increment each sprint.

To do this, define what activities are needed to release to end customers. We'll call this list "Potentially Releasable". With this list, then define which activities can be done each sprint. This forms the "Definition of Done". The difference between the two lists is undone work. The undone work must be completed at some point before release. This is not partially completed work.

Weak Definition of Done

If the Definition of Done only contains the underlined items from the Potentially Releasable list then the following behaviours will be observed:

Iteratively and incrementally a product is built according to the weak Definition of Done. This leaves undone work to build up each sprint.

  • The impact of this on the Product Owner and organisation is that they cannot release the product until the undone work is done
  • The undone work builds up sprint after sprint, making it harder and harder to forecast likely completion dates. Transparency and visibility are reduced as we don't really know where we are in development.
  • If we defer releasing to customers to later sprints it increases the risk of building the wrong features.
  • If we cannot release, the Product Owner's ability to adapt to risks and opportunities is reduced, as they are not able to change strategic direction. Release of value and validation is delayed. Value is diminished.
  • If testing and validation are deferred to later sprints it increases the risk of poor design and technical debt leading to rework. Imagine if we defer performance or acceptance testing for 4 sprints: we are not learning whether the product is sufficient to meet the service level agreements, and the amount of rework could be extensive.

This leads to results very similar to waterfall

Work towards a perfect Definition of Done == Potentially Releasable

Scrum relies on transparency. Decisions to optimize value and control risk are made based on the perceived state of the artifacts. The closer we are to done each sprint, the better decisions we can make for the next.

This graphic represents a perfect Definition of Done.

  • At least once per sprint we have met the Definition of Done.
  • If the Product Owner wants to release to the customer, they can.
  • No undone work remains at the end of the sprint. All activities are done in the sprint.
  • Transparency is high, as we always know where we are up to and are able to forecast our trajectory towards product goals. We can make decisions about what's next and change strategic direction when needed without being dragged down by undone work.

Scrum will shine a light on the organisational impediments in the way of agility. Defining Potentially Releasable will help identify the people, technology, domain, and internal and external dependencies that hold back agility. The Definition of Done is inspected and adapted sprint by sprint, moving closer to Potentially Releasable. This involves breaking down organisational boundaries and removing dependencies, which further increases the maturity of Scrum Teams, increases their cross-functionality and decreases the complexity of the organisation and its products over time.

Thanks for reading and if you want to learn way more about product ownership, scrum mastery and product development check out our courses.

Component Teams vs. Feature Teams

Infographic showing the comparison of Component and feature teams

The Scrum framework doesn't specify whether a Scrum Team is a feature or a component team, only that we have a done, integrated, tested increment at least once per sprint. It's important to understand the difference between these constructs and their advantages and disadvantages. There are a few disadvantages to Component Teams that can prevent organisations from gaining greater agility. Let's take a look.

Component Teams are:

  • Teams organized around technical layers or technical components.
  • Many teams are needed to turn a customer-centric feature on the Product Backlog into a releasable Increment.
  • Horizontal slicing; work is divided by technical layer or technical component.
  • Teams are only together for the duration of the programme or project lifecycle and may be involved in multiple projects.
  • Lots of coordination is needed to integrate a potentially releasable increment.
  • Very difficult to understand where we are. Lack of transparency.
  • Requires project management.

Feature Teams are:

  • Long lived, work on many features together over time.
  • Cross-discipline, cross-component. Each feature team has all skills to turn Product Backlog into releasable Increments.
  • Vertical slicing; work is divided by end-user functionality.
  • Work is integrated continuously within each Sprint.
  • Transparency ensured; no unknown, undone work. Potentially releasable increment at least once per sprint

Some or all of the following behaviours will be observed with Component Teams

Although sometimes necessary, make them the exception rather than the rule.

  • Leads towards or reinforces a waterfall process and holds back the breaking down of organisational silos like Business Change, Architecture, Development, Test and Ops
  • Leads to water-scrum-fall or scrummer-fall: something that uses the language of Scrum but definitely isn't Scrum
  • Release of value is delayed, validation delayed
  • Lack of end-to-end accountability for the customer leads to lack of creativity and self-organisation. Kills intrinsic motivation
  • Increased risk
  • Facilitates big up front design
  • Delays learning, delayed functional and non-functional testing
  • Increased hand off waste and delays
  • Project task switching impacts work and morale
  • High technical debt

Some or all of the following behaviours will be observed with Feature Teams

  • Leads to customer and business focused organisations
  • Leads to iterative and incremental release of value and validation
  • Maximised value
  • End-to-end accountability
  • Facilitates emergent design
  • Encourages creativity, intrinsic motivation
  • Enables self organisation
  • Shared code ownership promotes good engineering practices: clean code, CI, CD, refactoring, automation and testing. Higher quality = lower cost of ownership
  • Decreased risk. Deal with high risk items and deploy to production
  • Ensures transparency
  • Flexibility and Stability.
  • Can focus on the flow of value
  • Can learn from each other and cross skill

Are your products destined for the scrap heap?

Infographic illustrating a loss

Ward Cunningham, one of the authors of the Agile Manifesto, once said that problems with code are like financial debt. It's OK to borrow against the future, as long as you pay it off.

Since Ward first used this metaphor, which he called "technical debt", it has gained momentum. While people still disagree about the exact definition of technical debt, the core concept identifies a serious problem that many delivery teams are struggling to manage.

Technical debt has a visible and an invisible element. Businesses are aware of the visible part and monitor bugs, but it's the invisible element that kills business agility.

Am I working with an unhealthy product with technical debt? How would I know?

Let’s have a look at a few behaviours you may recognize:

  • A weak Definition of Done that is not well defined and does not represent quality for the product
  • Slowing rate of productivity and increased cycle times to add similar-sized new features
  • Stressful releases, as the development team goes into crunch mode working through lengthy change windows, executing scripts, manually deploying and configuring components, investigating incidents and fixing defects
  • Applications we are scared to touch, with aging libraries and unnecessary dependencies
  • Increasing time spent dealing with incident investigations and fixing defects
  • Little automated testing, or no extensive test suite, so we are forced to test manually, which can be repetitive, slow and error-prone
  • Fragile and tightly coupled components due to violation of good design principles, duplicated code, tangled architecture and unnecessarily complex dependencies.
  • Lack of test, build, and deployment automation, plus anything else that could be automated that you do manually today
  • Long feedback loops from Continuous Integration. Build, Test, Deploy
  • Slow, ineffective tools
  • Long-lived branches causing merging hell and increased delivery risk through late integration
  • Important technical documentation that is missing or out-of-date
  • Unnecessary technical documentation that is being maintained
  • Missing or not enough test environments causing delays in delivery and/or reduction in quality

With technical debt, a cumulative flow diagram showing open and closed defects over time might look like this

Defects or bugs can tell you a lot about the state of your product's quality. Great Scrum Teams fix bugs as they are found and don't let them accumulate. If the number of open or escaped defects is trending up over time, it's an indicator of lower quality and technical debt.

In the cumulative flow diagram, we can see that the average number of open defects (the vertical gap between the lines) is growing over time and the average time to fix (the horizontal gap) is increasing.
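
If you want to build the same view from your own data, here is a minimal Python sketch assuming a defect export with opened and closed dates; the file and column names are illustrative.

    import pandas as pd

    # Illustrative export: one row per defect with "opened" and "closed" dates
    defects = pd.read_csv("defects.csv", parse_dates=["opened", "closed"])
    days = pd.date_range(defects["opened"].min(), pd.Timestamp.today(), freq="D")

    # Cumulative counts of defects opened and closed up to each day
    opened_cum = [(defects["opened"] <= day).sum() for day in days]
    closed_cum = [(defects["closed"].notna() & (defects["closed"] <= day)).sum() for day in days]

    cfd = pd.DataFrame({"opened": opened_cum, "closed": closed_cum}, index=days)
    cfd["open_defects"] = cfd["opened"] - cfd["closed"]  # the vertical gap in the diagram
    print(cfd.tail())  # a widening gap over time signals accumulating technical debt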

What are the impacts of Technical Debt?

Let us first have a look at the cost of change from a technical perspective. The assumption here is that the red waterfall curve is using big design up front (BDUF), where the design is perfected before construction. This is highly unrealistic in the high-novelty, complex adaptive domain of software delivery. In software delivery, design is only validated through working (releasable) software. Hence, we see that the red line loses the ability to change rapidly over time.

At the other end of the scale, we have perfect quality in purple. Using professional Scrum, the design emerges each sprint and the Definition of Done (releasable) is achieved at least once per sprint. The design is frequently refactored and improved and all sources of technical debt are kept to zero.

The blue line is what happens to quality when the sources of technical debt are not dealt with each and every sprint. With every sprint, poor design decisions impact future decisions and compound each other.

Unreasonable cost of ownership

It should come as no surprise that the blue line is the most common scenario we see today. At first, the impact is low but soon starts to build momentum. The ability to maintain, support and add new features to the product becomes more complex and more expensive over time. This leads to a higher cumulative cost of ownership.

For example, if we built feature A in month 1 and the same feature A in month 18 with a good level of quality, where the development teams are maintaining a sustainable pace, then the cost of developing that feature would be comparable.

With high technical debt where the development teams are fighting against the design, the costs will grow rapidly over time.

This means:

  • The cost (or difficulty) of change increases, eventually to the point of unmaintainability
  • The ability to respond to the needs of customers decreases, making them extremely unhappy
  • The predictability of results decreases. Estimating effort and complexity becomes more difficult. This decreases transparency and can impact trust.

Who suffers in this vicious circle of deceit?

  • Customers face defects, missing features, crappy service resulting in lower customer satisfaction
  • 1st-line support teams will create more incidents; this will cascade and create increased demand on operational support teams. More incidents and defects mean more development time spent investigating, quick-fixing and patching releases. This results in increasing operational and capital expenditure.
  • Products with poor design are more complex than needed, have more dependencies, and require more infrastructure, more development time and more support time to run
  • The organisation, teams and leadership get bad publicity due to defects, delays, security issues or outages. Organisations suffer increased cost through eventually needing to rewrite products as they're no longer fit for purpose.
  • Development teams must deal with the bad work of other developers, which may cause attrition and loss of talent. It can also cause "broken window" syndrome: it's in a mess already, so why fix it?
  • As it gets worse, customers complain about slow delivery, which in turn increases the pressure to take more shortcuts, which increases the technical debt. In essence it's a rapidly spiralling vicious circle.

Options for dealing with Technical debt

Unfortunately, by the time organisations are paying attention none of the options are good.

  • Do nothing and it gets worse
  • Replace or rewrite the software (expensive, high risk, doesn't address the root-cause problem). Rinse and repeat in around two years' time
  • Stop creating more technical debt and systematically invest in incremental improvement.

Courses

Among b-agile's Professional Scrum courses, the Professional Scrum Master course discusses the concepts of Done and technical debt to the level needed by Scrum Masters and leadership teams to be able to educate the organisation in the impacts and how to manage them.

Checkout bagile.co.uk/psm

b-agile's Applying Professional Scrum for Software Development course gives an experience of how to deliver quality software with Scrum and DevOps practices. Understand how modern agile engineering practices and supportive DevOps tools improve a team's capability to deliver working software. Gain knowledge of how to leverage modern software development tools and practices.

Checkout bagile.co.uk/aps-sd

Coaching

b-agile has Technical Coaches with extensive experience in helping organisations reduce their technical debt and gain the mindset, culture and skills to enable business agility. If you'd like to know more about how we can help you or your teams build healthy products, check out our Technical Agility services or contact us to get in touch.

In our next post, we’ll look at a few ideas of how to turn the ship around

How to pass the Scrum.org Professional Scrum Master PSM I assessment


The Scrum.org Professional Scrum Master assessment consists of:

  • 80 questions
  • 60 minutes
  • Online Multiple Choice

Subject Areas

  • Scrum Theory and Principles
  • Scrum Framework
  • Coaching & Facilitation
  • Cross-functional, self-organizing teams

Preparation

Read the Scrum Guide http://www.scrumguides.org/ line by line a number of times; five would be good. This text has been refined over many years and every word has value. Understand each sentence in depth in terms of the events, roles, artefacts and the rules that bind them together. Understand inspection, adaptation and transparency and the Scrum values that create the foundation of Scrum.

The PSM I assessment will include questions that go wider than the Scrum Guide. You will need to be familiar with topics such as the impact of technical debt, scaling Scrum, and complementary practices like velocity, user stories, burn charts and many others. Understand the difference between what is Scrum as defined by the Scrum Guide and what is not Scrum but often associated with it.

Open Assessments

Scrum.org have created open assessment practice tests for different subject areas. Our advice is to take them all to experience a wider selection of questions to enrich your overall understanding of Scrum. When you get a question wrong, note the correct answer and assess why you answered it differently.

Our advice is to be passing the Scrum Open at 100% repeatedly before taking the PSM I.

You will need to create an account with Scrum.org before you can take the free open assessments for practice.

Take the free open assessments:

More information is available at Scrum.org Scrum Master Learning Path and Scrum.org Professional Scrum Competencies

Possible Reading List

  • Scrum – A Pocket Guide by Gunther Verheyen
  • Scrum Mastery by Geoff Watts
  • Servant Leadership by Robert K. Greenleaf
  • Coaching Agile Teams by Lyssa Adkins
  • Software in 30 Days by Ken Schwaber and Jeff Sutherland

Further reading:

  • Lean Change Management by Jason Little
  • Reinventing Organizations by Frederic Laloux & Ken Wilber
  • The Nexus Framework for Scaling Scrum by Kurt Bittner, Patricia Kong & Dave West
  • The Surprising Power of Liberating Structures by Henri Lipmanowicz & Keith McCandless
  • The DevOps Handbook by Gene Kim, Jez Humble, Patrick Debois & John Willis
  • The Professional Product Owner by Don McGreal and Ralph Jocham
  • Product Mastery by Geoff Watts
  • The Product Samurai by Chris Lukassen

Couple of useful blogs

Are you ready?

If you already have some experience in Scrum you may feel comfortable taking this challenging and rewarding certification. If you don't feel quite ready and you want to learn in a team-based, collaborative and transformational learning environment, then see our class listings here: bagile.co.uk/psm.

We have a very strong track record of course attendees leaving either our PSM or APS courses with a deep understanding and knowledge of Scrum, and we also offer free after-course support to enable them to pass the PSM I assessment.

Taking the test

  • Have a good stable internet connection
  • A place where you won’t be disturbed for 90 minutes
  • Recommend completing a Scrum Open assessment before the real thing to help get focused
  • Take it at a more energetic time of day
  • Recommend not taking it late at night, after a busy day or after a glass of wine!
  • If you get stuck on a question give it your best guess, bookmark it and move on. You can go back to the bookmarked questions later.

Hope this post aids you in your preparation for the Scrum.org PSM I assessment. Good luck from the BAgile team.