Code Intelligence Platform cross-product owner
- Slack channel: #cp-code-intelligence-platform
- Owner: Joel Kwartler
- Owner time split with other roles: 75/25 (this role / other roles)
Overview
Mission and ownership
The code intelligence cross-product owner focuses on all product, design, and engineering work necessary to achieve our 1-year vision.
This ownership includes:
- Coordinating the product, engineering, and design teams to align on strategy, goals, and roadmaps for the code intelligence platform
- Guiding product research and synthesis
- Creating new channels for feedback and research
- Maintaining a deep understanding of the market for similar devtools and functionalities
Team
- Joel Kwartler is the DRI and owner for the success and outcomes
- Quinn Keast is the design lead: the first point of contact for direct design feedback, coordinating design feedback and communication, and design domain expertise
- Loic Guychard is the engineering lead: the first point of contact for direct engineering feedback, coordinating engineering feedback and communication, and technical expertise
- Rob Rhyne was the owner and coordinator of the Cancun kickoff workshop on Sep 14
Process and Timeline
See the Process and timeline page.
Shared terms
Mission → Vision → Strategy → Execution (Goals, Roadmap)
These are shared terms so that we know where we are and are thinking about things at the right level together. This won’t go well if we believe we’re all working together when, in reality, some of us are focused on strategy, others on roadmaps, and others on vision.
Our mission answers: what is our long-term goal? [Done for us already]
Our 1-year vision answers: what will the world look like in a year if we are on track to achieve that goal? [Mostly done already]
Our vision is only partially defined until we define what a “code intelligence platform” is. That will be our first step.
Our strategy to achieve that vision is: how do we make our vision a reality? [TODO next]
This is going to be determined via this project. See Good strategy for what a strategy must have.
Our goals are: how do we know we’re making progress on our strategy? [TODO next]
This is going to be determined towards the back half of this project. See Good goals.
Our roadmap to achieve those goals is: how will we achieve those goals? [TODO next]
This is going to be determined towards the back half of this project. See Good roadmap.
Hierarchy of terms
To set up for success in 2023, we need all of these steps. But we will not try to do them all at once – they need to be sequential.
We will start with ensuring our vision is defined.
Then we will create our strategy.
Finally, we’ll move to specific goals and problems to solve on the roadmap.
Success metric
When the first phase of this project concludes on Jan 1 2023, it is successful if we have:
- A defined strategy to create the code intelligence platform.
- Goals and a roadmap to execute the strategy.
- Started design and engineering exploration and testing on the roadmap.
Note: we don’t expect these outcomes to be written in stone – if we’re learning and iterating successfully into 2023, it’s likely our roadmap will adapt and change, especially as we get further into the future.
Each of these terms deserves its own strict definition of success, as follows.
Good strategy
We’ve arrived at a good strategy when we have something that is:
- Focused
- Easy to communicate to teammates and to customers
- Not trying to be all things to all people, but makes concentrated deep bets
- There should be very clear answers to “things we are not going to do”, at the level of detail where we can keep a running list as necessary
- Contains only the critical path: if we could still achieve our vision without succeeding at [X], [X] is not part of our strategy
- Actionable
- Teams understand the shared vision and their role in delivering on it, and are all positioned to succeed
- We can answer key prioritization and strategic questions based on our strategy
- Problem-oriented
- It’s clear how we solve the problems preventing our vision from becoming reality
- It’s clear how problems are related or dependent
- Insight-driven
- Supported by user research
- Supported by data
- Solving the right problems
- Cohesive
- The pillars of our strategy relate – we’re not trying to achieve two or three unrelated strategies at once
Good goals
We’ve arrived at good goals when we have goals that are:
- Iterative
- They’re fast to measure and re-measure
- The shorter the timelines we set on goals, the sooner we know if we’re not on track
- Specific
- Scoped to only the goals/problems needed to achieve our strategy
- Actionable
- They directly drive our strategy
- Focused on outcomes (not outputs)
- Example: As we move to MAU pricing, it’s likely we have MAU metrics as a key KPI in many goals
Good roadmap
We’ve arrived at a good roadmap when it:
- Is validated to very likely achieve our goals
- (We validate our roadmap during this planning period)
- Matches our timeline
- Meaning: a 9-month series of sequential problems (and, for the nearer-term months we’ll build immediately starting Jan 1, features); it will get fuzzier the further out in time you go
- Maps to owners, drivers, and teammates
- Is publicly shareable with customers (and anyone)
- As a result, this means we must also focus on quality, confidence, and storytelling
Principles
In addition to our applicable product principles and design principles, these are principles specifically for “what is Joel Kwartler doing as DRI?” and “how can we work collaboratively to ensure we most effectively use the next three months?”
Operate with autonomy and expertise
Joel Kwartler is the DRI for how we arrive at our strategy, not what our strategy is. We are all experts in various aspects of our users, features, and opportunities. This is an initiative everyone will need to take an active part in.
Consider that the work we do now sets us up for success in the following 9 months across our entire engineering department, and make sure this is a “high priority” on your personal todo list. It won’t be possible to ignore this process and still successfully contribute to achieving our vision.
We will push decisions down: we want to define user outcomes and overall success metrics as a product/design team, but leave tactical planning decisions to each product-eng unit to reduce overhead. Planning work must focus on getting a prioritized list of customer outcomes and leave space for teams to figure out most details and iterate.
Build a shared point of view
We should effectively share the updates, learnings, and progress made every week across the team. Those updates will be posted in #cp-code-intelligence-platform. We will keep room for discussion/followup/questions/disagreements (all useful).
When we are at the research stages, teams will share a quick one-paragraph written summary and a 120-second Loom with key findings (that’s ~7-9 teams, so ~20 minutes of updates). Joel will then share a 120-second overall Loom each week so less-involved stakeholders get the highlights, and to synthesize across teams. This will let us avoid duplicating efforts and ensure we all benefit from everyone’s work.
Create strong convictions, weakly held
This is especially important here: we need to believe in something strongly to pursue it, and pivot quickly if we invalidate it.
For the sake of feedback, we should err on the side of picking any direction when we aren’t sure, and then seeing what reactions/feedback we get (vs leaving things open-ended too long).
Be tactical and creative with our customer research
Examples of things we will do: more customer Slack channels focused on product feedback; a product advisory board; synthesizing previous research through a new platform-focused lens.
Roles of Product, Design, and Engineering
Product’s role
As we plan this project, there are a few product-related risks to avoid:
- Failure to communicate and collaborate; becoming silos. We have a complex product, and any “platform” vision will involve all of our efforts. We need to ensure we’re not pursuing contradictory goals, features, or visions.
- Failure to operate “at the same level.” We will confuse ourselves if one person is focusing on features/roadmap while another focuses on strategy. We need to maintain a consistent level of fidelity across our efforts and advance to each level (roughly) together.
- Failure to decide, or getting stuck in analysis paralysis. Deliberate planning time can lead folks to slow decisions and to always want “more” validation before pursuing a path. We can avoid this by keeping ourselves honest about when we have enough information to decide, sharing in validation efforts (to reach that level of information faster), and acknowledging that it’s better to make a decision and then invalidate or iterate on it than to make no decision at all. We therefore want to set specific timelines for each stage of this process.
At a high level, the product team will be involved in some of these ways specifically:
- Owning the timelines and outputs of each stage. The product team owns keeping us to our plan at each stage, whether those outputs are explorations, syntheses, or specific plans. (For many of the early stages when we’re all working together, this will be Joel Kwartler; as we move to more independent work synthesized together each week, each PM will own this.)
- Ensuring we prioritize the most impactful directions. As we discover multiple paths for research and exploration, the product managers will prioritize which we focus on (with design’s input, but product teammates are the deciders), based on inputs like:
- The use case / job to be done (Desirability).
- The technical tradeoffs (Feasibility).
- The alignment to our strategy, business, and overall product (Viability).
- Product teammates may not own each of those inputs, but they are responsible for collecting them to the fidelity necessary for a decision.
Design’s role
As we plan this project, there are a few design-related risks to avoid:
- Difficulty planning design resources when problems are unknown. The actual problems to be solved are as yet unknown; we need to do the research, synthesis, and strategy work to define them. From there, we can move to actual design exploration, testing, and validation.
- Difficulty validating design directions with a high level of confidence. The nature of Sourcegraph and the problems it solves for users—today and in the future—means many design directions cannot be validated with interactive Figma prototypes alone, but instead need to be prototyped in code (e.g., entirely new whole-product changes like the search home page, vs. individual product pages).
- Simultaneously avoiding big-design-up-front and feature silos. There are likely two “tracks” of design:
- A higher-level track where we rethink the foundation of information architecture, structure, and interaction models to evolve from code search towards code intelligence
- A product-feature track where teams evolve existing product feature areas to solve newly-identified problems while also integrating with the outcomes of the higher-level track.
- The higher-level track represents significant design risk (is a dependency for a lot of other design work; reflects a conceptual pivot in a product that customers already pay for in its current form), while product-feature tracks represent less risk, but still a lot of complexity in avoiding feature silos.
At a high level, the design team will be involved in some of these ways:
- Ensuring we use research and validation best practices. The design team and design partners will guide our research methods and synthesis, not just as direct contributors but as advisors to the processes each team pursues.
- Owning design problem definition, exploration, and validation. As we identify problems and opportunities, design will be responsible for making sure that these problems and opportunities are clearly defined and actionable, will drive design exploration, and will plan and lead validation efforts. As we’ll be operating with limited engineering resources, many design directions we take into validation will need to rely on design resources to define and create prototypes.
- Managing expectations and outcomes. At the 3 month mark, we won’t necessarily be executing on high-quality, production-ready design outcomes; instead, we will likely be continuing to prototype and validate, in code, together with engineering. Design will ensure we don’t move our designs too far ahead of engineering so that work is wasted or must be redone.
Engineering’s role
As we plan this project, there are a few engineering-related risks to avoid:
- Failure to involve engineering perspectives end to end. The code intelligence platform effort is a fundamentally cross-disciplinary effort that relies on product, design, and engineering contributions throughout. The overall discovery and delivery phases help us to think about the activities that take place and which domains will be contributing the most—but don’t imply any limitation to specific domain involvement.
- Failure to consider technical feasibility, viability, and problems. This effort represents significant technical risk and validation. We cannot conduct research or define our strategy and problems/goals with only product and design perspectives.
At a high level, key engineers will be involved in some of these ways:
- Providing input on feasibility, complexity and team impact of the directions proposed by product and design.
- Building prototypes for validation, as applicable.
Known project risks
Strategic risk
- What other technologies and products must be taken into account as we form our strategy?
- What things could change about the devtools environment or general developer market that would invalidate our strategy?
Process risk:
- Over-correcting on either current or future use cases:
- If we focus too heavily on pulling other features forward, we risk our core use case and business line of ‘code search’.
- If we fail to map customer and user needs we do not currently serve, we risk not innovating far enough along the lines of creating a platform.
- We over-correct to treating this exercise as a blank slate and disregard previous thinking
- We should bring in previous thinking at the right time (without biasing ourselves too early or forgetting it until too late)
- We will tap into folks on eng who have thought about this before as part of this research – if only to get their perspective on the definition of “platform”, the sort of value it can/should bring, and state of the art / gaps in the industry (not just “what can we build”).
- We under-correct and merely re-piece together our previous roadmaps, without uncovering meaningful improvements beyond them
- The best way to avoid this may be to make problem / concept definition a strict prerequisite to ideating on solutions
- We design and ideate at too high a level (eg. the UI level), because we’re comfortable with it, without digging deep enough into whether we have the technical fundamentals to enable what we design.
- We let the constraints of our current technical fundamentals dictate what we design / ideate, and don’t dig deep enough into whether working on the technical fundamentals themselves could unlock a fundamentally different solution space in the end user experience / value.
- We communicate decisions without the right level of clarity (or without conveying their level of validation) and end up confusing stakeholders who are less closely involved in the strategy work
Resources
- Process and timeline
- Initial ‘plan for the plan’ google doc
- Cancun Design Workshop and outcomes of the workshop in #cip-workshop