Monday, October 19, 2020

CI/CD Process

Below is the entire CI/CD process based on GitFlow. It shows the branching strategy, the various test and deploy cycles, and the process for releasing HotFixes in Production.

Hopefully this will help projects get started with setting up or validating their own process. If you follow any variations, please comment about them.

[Diagram: GitFlow branching strategy with the test, deploy, and HotFix cycles]
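For teams that want a concrete starting point, here is a minimal sketch of the HotFix path under GitFlow. Branch and tag names follow the common GitFlow convention and are placeholders; adapt them to your own repository and CI triggers.

    # Branch the hotfix off the production branch (master).
    git checkout master
    git checkout -b hotfix/1.0.1

    # ...fix, commit, and let CI run the test cycle against the hotfix branch...
    git commit -am "Fix production issue"

    # Merge back to master, tag the release, then merge into develop so the
    # fix is carried forward into the next regular release.
    git checkout master
    git merge --no-ff hotfix/1.0.1
    git tag v1.0.1
    git checkout develop
    git merge --no-ff hotfix/1.0.1
    git branch -d hotfix/1.0.1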

China Mobile App Distribution

Having worked on Web and Mobile Applications for users in China, I have collected below some findings which may be useful for hosting a Mobile App in the China markets.

If you do not have an organization branch set up in China, it’s advisable to engage an agency in China that can help smooth your distribution.

A note on WeChat - consider using the WeChat platform instead of creating a standalone Mobile App. The WeChat platform supports creating apps and delivering them to users. WeChat is on almost all phones in China, so it may be more far-reaching than trying to push your own App onto users’ phones.

Given the uncertainties, we ended up engaging another agency in China to do this for us. If you would still like to go ahead, then below are the findings I have from working on a Project.

  • iOS does not have any special China-specific store. It’s one single global iOS App Store, so you can publish Mobile Apps for iOS users in China without worrying about anything.
  • Android has several China-specific stores. Popularity-wise, an app should be published on the top 10, if not the top 30, Android app stores. To publish a Mobile App for Android, one needs to have a Business Registration in China.
  • An Android App, if not free, also requires opening a bank account in China under the same Business Registration.
  • In addition, Android app stores require us to obtain a “Copyright certification and registration number”. It’s difficult to get this from the government if the company is not a local China company.
  • API or website hosting in China is not mandatory; however, for performance reasons it’s advisable to host websites in China. The China firewall and the infrastructure for outgoing traffic from China may slow down websites hosted outside China. If not in China, the next best location to host websites is Hong Kong.
  • For website hosting, to get a China domain (i.e. .com.cn), ICP Registration is required.
  • Hosting a website inside China also requires ICP Registration.
  • eCommerce websites with a bank account outside China that charge Credit Cards of people in China may face issues, as Credit Card payments to bank accounts outside China may not be supported for all card types (I am not very sure of the specifics of this aspect).
If I missed something, please comment and I will add it here.

Wednesday, February 6, 2019

PowerApps vs Custom Mobile Development

Overview

I recently did an evaluation of when to use Microsoft PowerApps versus building a custom Mobile Application. Below are the findings from that study. A caveat: I have only developed a couple of Model-Driven Apps and done a Udemy course for an overview; I am not a PowerApps developer working in this area full time. As there is no article to date which guides through this topic, I thought of creating one. If you happen to be an expert in PowerApps and find something wrong, please do send a comment and I will update the Article so that others can benefit.

What is PowerApps?

PowerApps is a suite of apps, services, connectors and data platform that provides a rapid application development environment to build custom apps that connect to your business data stored either in the underlying data platform (Common Data Service for Apps) or in various online and on-premises data sources (SharePoint, Excel, Office 365, Dynamics 365, SQL Server, and so on).
Apps built using PowerApps provide rich business logic and workflow capabilities to transform your manual business processes to digital, automated processes. Further, apps built using PowerApps have a responsive design, and can run seamlessly in browser or on mobile devices (phone or tablet). PowerApps "democratizes" the custom business app building experience by enabling users to build feature-rich, custom business apps without writing code.
PowerApps also provides an extensible platform that lets pro developers programmatically interact with data and metadata, apply business logic, create custom connectors, and integrate with external data.
[The section above was copied from Microsoft documentation. My analysis is in the sections below.]

When to Build using React Native (or Xamarin etc.)?

  1. You need your application to be discoverable directly in the App Store and Play Store.
    • PowerApps apps run inside the Microsoft PowerApps player and do not appear as standalone applications in the App Store or Play Store.
  2. When you need native performance (PowerApps load times of 5 to 10 seconds compared to 2 to 3 seconds natively)
    • PowerApps is based on the open-source Apache Cordova framework
  3. When you want to have users outside your Organization
    • You can't share an app with a user or group outside your organization.
  4. When you need to use native capabilities, e.g. opening the Camera within a section of the UI with custom buttons, a different-looking date picker, or picking multiple images from the mobile Gallery
    • PowerApps provides access to Camera, Geolocation, Video, Barcode scanner, Microphone, Audio, etc., but the experience cannot be customized beyond a certain level.
  5. When you have a very large number of users, or the PowerApps licensing cost is prohibitive

When to Build using PowerApps?

PowerApps shines when Business Users can create their own apps, with little or no developer intervention, for their day-to-day needs, connecting to back-end data in Dynamics/SharePoint/Salesforce/Excel Online, etc. This allows for Innovation within the firm and increased Productivity. Actions in a PowerApps App can also trigger any Workflow created in Microsoft Flow to automate sending emails, creating an event in EventBrite, setting a flag in SharePoint, etc., automating manual processes being done by the Business.
PowerApps is a platform that offers a pre-defined set of Features which cannot be extended beyond a certain degree. Before building with PowerApps, the recommendation is to draw out the full UI/UX and confirm that all features can be met by PowerApps. Without this analysis, there can be major road-blocks during development.
Refer to PowerApps Ideas for the types of requests users are asking to be included in the Product.

Build PowerApps when:
  1. When the points mentioned in the section "When to Build using React Native" are not applicable
  2. When a "Citizen Developer" in Business/Data wants to quickly demonstrate a proof-of-concept Application without developer involvement. If the Application requirements cannot be met via PowerApps alone, it can then call for a new build in React Native.
  3. When the app is to be distributed only to people within the Organization
  4. When accessing Organization-only assets, using PowerApps with built-in connectors can speed up development, e.g. Excel Online / building SharePoint forms / Salesforce / PowerBI, etc.
  5. Also, refer to the PowerApps Use Considerations section below.

PowerApps Use Considerations

  1. To retrieve data from on-premises sources, e.g. SQL Server, VMs, etc., you need to install the Gateway software. Queries are routed via Azure Service Bus. The Gateway cannot be installed on Linux machines, requires a machine with 8 GB RAM, and needs outbound ports opened.
  2. Cannot create Custom Controls: PowerApps provides lots of Controls, including Rating, Video, and Charts, and they can be customized. However, if there is a need for a Custom Control which is not provided, it cannot be built. It's important to have full visibility of the UI/UX and check whether requirements can be met with out-of-the-box Controls or not.
  3. Role-based security is supported and can be applied to the UI to show / hide / disable controls, either by integrating with the Microsoft Graph API or by calling a Custom API to get user roles and setting the Visible property = If("Administrator" in MyGroups.displayName, true, false)
  4. Notifications are supported: with the PowerApps Notification connector you can send various notifications that directly target your apps.
  5. Maps are supported with a caveat: there is no in-built Map control, but an Image control can load maps from APIs that provide static map images. E.g. Bing Maps, Google Maps, and OSM-based maps have APIs which can return PNG images as output. Zoom in/zoom out thus has to be coded by hand, as there is no in-built map control. Advanced map features, if required, need to be evaluated further to see whether the map API supports them or not. As an example, look at the functionality provided by the Static Images API from Mapbox here - https://docs.mapbox.com/api/maps/#static. Google and Bing Maps APIs also support returning image files. The map is displayed as an image, so there is no interactivity; i.e. you can paint a marker or a GeoJSON layer via the API, but if something should happen on click of a particular marker, it cannot be coded. (See the sketch after this list for fetching such a static image.)
  6. Pagination is in-built for a few controls and a few data sources. Where it's not supported by default, it involves writing an API and calling it via a custom connector.
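
To validate what a static map API returns before wiring its URL into an Image control, here is a small PowerShell sketch. The URL shape below is an illustrative assumption based on the Mapbox Static Images API docs linked above; the token, style, and coordinates are placeholders, so verify the exact format against the current docs.

    # Fetch a static map image; the resulting PNG is what a PowerApps
    # Image control would display when given the same URL.
    # NOTE: URL format and parameters are assumptions - check the Mapbox docs.
    $token = "YOUR_MAPBOX_TOKEN"          # placeholder access token
    $lon = 77.2090; $lat = 28.6139; $zoom = 12
    $url = "https://api.mapbox.com/styles/v1/mapbox/streets-v11/static/" +
           "$lon,$lat,$zoom/600x400?access_token=$token"
    Invoke-WebRequest -Uri $url -OutFile "map.png"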

Other No/Low Code Implications
  1. Mostly, if we are to build apps which connect to a complex database with multiple joins, it will require API-level coding. No-code alone is only useful when consuming / updating sources like SharePoint Lists, Excel Online, OneDrive, etc.
  2. Editor nuances - as there is no code, if you have 10 screens for which you need to change Styles, you cannot do a find-and-replace; you have to go to each control and update the Style property in the Properties window.
  3. As there is no code, a UI cannot be changed at the same time by two developers, as it’s not possible to merge the changes. The platform can, however, maintain versions of Applications, and versions can be reverted. It’s also possible to export the entire Application as a zip file, and these files can be placed in Source Control.
  4. An Excel-like formula language is used, e.g. to navigate to another screen - Navigate(ProductsScreen, ScreenTransition.Fade, { selectedSection: Dropdown1.Selected.Value })

Overview of creating Applications in PowerApps

How to connect PowerApps to Custom API?

PowerApps DevOps Activities

Source Control - PowerApps apps are version-controlled as a whole, i.e. you cannot revert a single screen change; you need to revert the whole Application to a previous version. The entire PowerApps App can be exported as a zip and committed to Source Control if required, though since the zip cannot be diffed or merged, this may be of little use.

Continuous Deployment - [A proof-of-concept needs to be done to validate this, as there is not enough documentation.] The Package Deployer tool can be used to export PowerApps as a zip and then import it into another environment. It can be scripted using PowerShell and thus can be included in an Azure DevOps pipeline.
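
As a starting point for that proof-of-concept, here is a minimal sketch of a scripted package import, assuming the Microsoft.Xrm.Tooling PowerShell modules that accompany the Package Deployer tooling. The connection string, paths, and package name are placeholders; verify cmdlet and parameter names against the module documentation.

    # Install the connector and package deployment modules (one-time setup).
    Install-Module Microsoft.Xrm.Tooling.CrmConnector.PowerShell -Scope CurrentUser
    Install-Module Microsoft.Xrm.Tooling.PackageDeployment.PowerShell -Scope CurrentUser

    # Connect to the target environment (values below are placeholders).
    $conn = Get-CrmConnection -ConnectionString `
        "AuthType=Office365;Username=admin@contoso.com;Password=***;Url=https://target.crm.dynamics.com"

    # Import the package produced by the Package Deployer tooling.
    Import-CrmPackage -CrmConnection $conn `
        -PackageDirectory "C:\Deploy\MyPackage" `
        -PackageName "MyPackage.dll"
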
Monitoring - PowerApps and Flow can be monitored manually from their Admin Consoles. In addition, custom monitoring can be coded in PowerShell. Refer to this for some thoughts.
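
As a flavor of what such PowerShell monitoring could look like, here is a sketch assuming the Microsoft.PowerApps.Administration.PowerShell module (output property names may differ; verify against the module documentation):

    # Sign in as a tenant admin and list all PowerApps apps in the tenant,
    # e.g. as input for a report on unused or orphaned apps.
    Install-Module Microsoft.PowerApps.Administration.PowerShell -Scope CurrentUser
    Add-PowerAppsAccount
    Get-AdminPowerApp | Select-Object DisplayName, AppName, Owner
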
Automated Testing - "EasyRepro" is a tool developed by Microsoft on top of Selenium to perform automated testing for PowerApps. There are not many articles at present describing its use or limitations. Refer to this.

Conclusion


PowerApps is a Rapid Application Development platform which shines when Business Users can create their own apps, with little or no developer intervention, to automate their day-to-day work, giving them access to Organization data. This allows for Innovation within the firm and increased Productivity.

Thursday, April 21, 2016

Single Virtual Machine in Azure - How to achieve maximum availability with minimum cost?

You probably already know by now that if you host a single VM in Azure, there is no SLA provided by Microsoft.

What does that mean? It means that if this machine goes down, the time to bring it back up is not guaranteed.

Disclaimer:
After reading the whole Article, you may say that the solution is not really using 1 VM. And you are correct: this article suggests an alternative at almost the cost of 1 VM, and if you originally decided to use a single VM, it answers a few questions to justify that decision. I hope it will be a useful read.

Why can the VM go down?
  •  Hardware Failure on the rack in DataCenter where VM resides
  •  DataCenter down (most likely due to terrorist attack, war, power grid failure, natural calamity) 
  • AppFabric internal maintenance (more on this below)


What is AppFabric Internal Maintenance?
AppFabric may need to upgrade its environment for major updates. This happened thrice in 2015 in Azure China (as confirmed by an Azure China support person). The stated maximum downtime during such an upgrade is 15 minutes. However, this 15 minutes is not guaranteed, and in addition, the maintenance can happen during the daytime (peak hours), not necessarily when no one is using your application.
You would be made aware of this scheduled maintenance 1 to 2 weeks in advance.

How to achieve maximum availability with minimum cost?
We can answer it now. But let’s combine this question with one more question.

Why should you backup your Azure VMs?
  • Chances of it being corrupted while you are upgrading your custom software installed on it
  • Chances of it being corrupted while you are upgrading third-party software on it, e.g. the OS / SQL Server 2014.
  • A malicious employee or hacker erasing some key system files


What are the options when VM gets corrupted and we want to fix it?
  • Install a new OS if that’s the reason it got corrupted
  • Install your custom software if that caused the corruption. You would most likely get the latest from the Release branch of your source control repository.
  • Install any accompanying software if the VM crashed, e.g. if your custom software is Drupal-based and the VM crashed, it may require you to install Linux, Apache, Drupal, MySQL, etc.


The key is the time it takes to bring your VM back up. Are you okay with that? If not, then you may need to back up your VM (keep a copy of the VM, that is) and bring the backup up when the original VM crashes.

Where can you keep your Azure VM backup?
  • On-premises
  • On a page blob in the cloud, which is triple-replicated
  • Use the Azure Backup Service, which allows point-in-time restores and provides a UI to manage backups. As of April 2016, the backups are not cross-region. So if all datacenters in a region are down, I would assume that your backup would be lost.


Let’s come back to our original question and view it along with need for Backup:

How to achieve maximum availability with minimum cost?
By using 2 VMs instead of 1, but paying for only 1.

How?

Add two VMs in one Availability Set. Then, from the Azure portal, mark one VM as Shutdown (De-allocated). Microsoft does not charge you for a de-allocated VM.
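
For reference, here is a minimal sketch using the classic (ASM) Azure PowerShell cmdlets of that era. The cloud service and VM names are placeholders; verify the behavior in your own environment.

    # De-allocate the standby VM so it stops incurring compute charges.
    # (Without -StayProvisioned, Stop-AzureVM releases the compute resources.)
    Stop-AzureVM -ServiceName "myCloudService" -Name "standbyVM" -Force

    # During planned maintenance, or after a crash of the primary VM,
    # bring the standby online:
    Start-AzureVM -ServiceName "myCloudService" -Name "standbyVM"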

Benefits:
  • With two VMs in an Availability Set, you get an SLA from Microsoft of 99.95% availability
  • The two VMs are placed in different fault and update domains, so a rack-level failure will not take both down
  • On a scheduled AppFabric maintenance day, you can bring your second VM online for 100% availability on that day.
  • Fastest possible VM restore in case of a crash: simply bring the other VM online.
  • And you don’t need to worry about taking and paying for the Azure Backup Service!!


How much do you need to pay extra anyways?
  • The VHD of the shut-down VM still requires space on a page blob, and you need to pay for that. But the page blob price is so low that you will hardly notice the spend.
  • If Azure AppFabric maintenance happens, say, thrice in a year, then you have to pay for 3 days of the second VM, which is the cost of keeping your service highly available.


So that’s it. I believe this is the cheapest solution giving High Availability plus backup for a single-VM deployment. If you see issues with this solution, please post your comments and I will update it.

Some more FAQs on using Single VM:

Q. When we shut down the second VM and bring it up, will the public IP change?
A. The public IP will not change, as the two VMs are in the same cloud service.

Q. When we shut down (de-allocate) the VM, will all software on it be lost, requiring us to re-install it again?
A. No. The VHD still exists and everything is retained.

Q. In case of planned outage, will the VM be up in maximum X minutes? What is that X?
A. During the planned maintenance window, each virtual machine (VM) that is not in an availability set may experience a reboot. The virtual machine will have approximately 15 minutes of total downtime. Temporary disks and Azure storage disks will be preserved during the maintenance. Microsoft informs you of the maintenance at least 1 to 2 weeks prior to the outage, and the maintenance will be executed within twelve (12) hours from the announced start time. Please note that VMs that belong to an availability set, and Cloud Services web and worker role instances, will not be impacted by this maintenance operation.


Sunday, May 26, 2013

TOGAF Overview

In this series of blogs, I will try to summarize TOGAF, The Open Group Architecture Framework. I did my TOGAF 9.1 certification in 2012. I have been part of two teams which were in the transition phase from a baseline architecture to the next-generation architecture.

TOGAF is an Enterprise Architecture framework which has evolved from best practices in EA development. The two teams I worked with were not following TOGAF, but several practices they followed overlap with TOGAF. Based on that experience and the knowledge I have gathered via the TOGAF certification, I will be writing this blog. Your comments will help me refine it further. I will mainly refer to the TOGAF documentation while writing this blog.

You can adopt TOGAF as your enterprise architecture framework or you may wish to adopt parts of the TOGAF framework and use them with your existing EA framework. TOGAF works well in both these scenarios.

Before diving into TOGAF, let’s see what Enterprise Architecture work involves. An Enterprise Architecture encompasses the entire enterprise, which starts from the business. When you have the Enterprise in your perspective, the entire thought on architecture changes. You do not begin by thinking about 2-3-n tiers, High Availability, Messaging, etc.; your perspective now is what the business processes are and how IT can help evolve or support them. You begin with understanding the Business Architecture: what Architecture Principles your applications will be based on, what your Architecture Roadmap to support these business processes will be, who your stakeholders will be and how they will be managed, what the current architecture is (if any) and whether there are any gaps in supporting the business processes, how to store reusable Architecture Building Blocks, how to migrate from the existing architecture to the target-state architecture, what applications will be required and how they will be governed, how a change in architecture will be implemented... and the list goes on.

Implementing Enterprise Architecture is not a one-person job, and neither is it a single project within an Organization. It’s a practice which must be set up within the organization and recognized and funded at the CXO level. An Enterprise Architecture practice is not a standalone ivory-tower effort. It works in parallel with the PMO and Operations.

So, will you then not be thinking of 2-3-n tiers, High Availability, Messaging, etc.? That depends on what sort of architect you are. If you are an Enterprise Architect, you will think of these when you define architecture principles. If you are a Data Architect, you will define the data aspects. As an Application / Solution Architect, you will identify tiers and messaging in accordance with the Architecture Principles laid out, and if you are a Technical Architect, you will define high-availability platforms. Roles and responsibilities of architects will vary across organizations.

I hope this gives a glimpse of what is involved in an Enterprise Level Architecture. TOGAF provides an architecture framework which defines a process following which you will be able to develop an Enterprise Architecture.

TOGAF is developed by The Open Group and consists of seven parts:

1. Introduction - TOGAF introduction and common terminologies and definitions

2. ADM - Architecture Development Method - an iterative methodology for how to develop an Enterprise Architecture. It has 9 phases, and each phase has Objectives, Steps, Inputs, and Outputs. E.g. in the Preliminary Phase, one of the objectives is “Identify and scope the elements of the enterprise organizations affected by the Architecture Capability”, one of the inputs is “Board strategies, business plans, business strategy, IT Strategy, business principles, business goals, and business drivers”, and an output is “Request for Architecture Work”.

3. ADM Guidelines and Techniques - describes several guidelines and techniques which can be applied when performing the ADM. E.g. how to do a Gap Analysis.

4. Architecture Content Framework - when developing an architecture via the ADM, several outputs will be produced (e.g. Project Plans, the Architecture Definition Document...). The Content Framework provides a way to structure and present these outputs. All outputs of the ADM are categorized as a Deliverable, an Artifact, or a reusable Building Block, and the Content Framework provides a definition of each of these. The Content Framework also provides a meta-model to structure outputs from all phases. This provides consistency among different architectures.

5. Enterprise Continuum and Tools - within an organization, there will be a continuum of architectures from foundational to specific architectures. This part of TOGAF classifies the entire spectrum and also provides guidance on how to partition the architecture and how to maintain the Architecture Repository.

6. TOGAF Reference Models - TOGAF provides two reference architecture models: the Technical Reference Model (TRM) and the Integrated Information Infrastructure Reference Model (III-RM). The TRM provides a foundation which includes services and functions required to build more specific architectures. It is essentially a taxonomy of components and structure which is generic to all architectures. The III-RM is also a taxonomy and is applicable to organizations which have silos and need information to flow across and be aggregated.

7. Architecture Capability Framework - provides guidance on how to set up the Architecture capability in the organization in a phased manner.

This blog provides an overview of what TOGAF has to offer. It introduces you to several terms without going into detail. I hope the blog still offers enough information to inspire you to take a look at TOGAF.
In the next blog, we will look into the different architecture domains covered by TOGAF, i.e. the Business, Data, Application, and Technology Architectures.

Wednesday, April 24, 2013

Practical Scrum - Story Point Estimation

This blog is about how to come up with a release plan using the story point estimation technique. I am writing this blog while I am working on sprint 4 of our project. We are following a two-week sprint cycle, and our project will be completed in sprint 8. When I say sprint 8, it’s the commitment from my entire team, and we have not estimated any of our tasks in hours.

I am the tech lead, a developer, and the scrum master on my team. I also write integration tests. Scrum master can be a dedicated role as well as a shared one.

This blog assumes that you understand scrum, so I will not explain what a story point or velocity is. In a line: a story point is an abstract measure of the size of a user story, and velocity = total story points burned in a sprint.

This is how we began to estimate our release date. We added all our stories to the product backlog. We use Jira as our scrum tool. The stories were then prioritized, and based on priority, we selected 30 user stories for our first release. These story names were then written on sticky notes. Then we called a release planning meeting.

This meeting happened four times, with a duration of 1 to 1.5 hours each. The participants were the entire team: three developers (including me), one tester, and the product owner. We used the Fibonacci sequence for story point estimation. We gathered in a room, sat around a big oval table, and created 9 virtual columns. The column headers were 1, 2, 3, 5, 8, 13, 20, 40, and 100.

The product owner picked a medium-complexity story (the complexity was judged as medium by me). Then he started explaining what that user story means, everything it involves, and when it will be considered complete. All team members asked questions to get more clarification. Ultimately, everyone’s queries were answered. The team discussed that as part of this user story we would need some UI changes, a new method in the business logic, some regular stored procs, and roughly 2-3 automated integration tests. They trusted me that, based on the entire stack, it was of medium complexity, and placed it in the column numbered 5.

Then we picked the next story; the product owner explained, and the team asked questions and thought among themselves about what would be required technically at a high level. They concluded that this story was less complex than the previous one but not entirely straightforward either. As it was lower in complexity, it was kept in the column numbered 3, to the left of the column numbered 5.

The next story went to the column numbered 13, as it was more complex than the one we placed in column 5. The next went to the column numbered 1, as it was the easiest one. Then came a story thought to be between 13 and 20, but as we did not have any column in between, I asked the team to put it in 20.

The stories which went to 40 or 100 were rejected by me. I asked the product owner to break these down further before the next meeting so that each story is 20 points or less. There were a few stories numbered 13 or 20 which logically could be split into smaller independent ones. This was done instantaneously, and the new stories then moved to the columns numbered 5 or 8.

Before you get bored, let me say that this went on, and after 4 meetings, all release 1 stories were in one of the following columns: 1, 2, 3, 5, 8, 13, 20. During these discussions, some moved to the backlog and others were picked from the backlog by the product owner as he gained more understanding from the team’s inputs.

The team again trusted me, and we kept the iteration length at 2 weeks. We started coding sprint 1. We did a sprint planning meeting; how we did it will be covered in another blog. At the end of sprint 1, we checked how many stories we had completed: two from the column numbered 5 and one from the column numbered 13. Total story points completed in sprint 1 = 2*5 + 13 = 23.

In sprint 2, we completed 20 and in sprint 3, we did 22 story points.

From this we deduced that we are doing an average of Math.Floor((20+22+23)/3) = 21 story points in one sprint. We have a total of 160 story points estimated for our Release 1. So with 21 as the average, we should be done in sprint 8. With each sprint being 2 weeks, plus schedule and feature buffers, we now have a release date with us.
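For completeness, here is the same arithmetic as a small PowerShell sketch (all numbers taken from the sprints above):

    # Average velocity over the first three sprints, floored as in the post.
    $velocities = 23, 20, 22
    $avg = [math]::Floor(($velocities | Measure-Object -Sum).Sum / $velocities.Count)

    # 160 estimated points for Release 1 => projected finishing sprint.
    $sprints = [math]::Ceiling(160 / $avg)
    "Average velocity: $avg points/sprint; projected finish: sprint $sprints"
    # => Average velocity: 21 points/sprint; projected finish: sprint 8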

This is just one of several methods of estimating a scrum project. Others include using historical results, or forecasting by deriving hours, as documented in Mike Cohn’s book Agile Estimating and Planning, chapter 16.

In this blog, intentionally to keep it simple, I did not explain the sprint planning meeting and also did not talk about certain things like project schedule buffers, backlog grooming, etc.

I will expand on a few of these points in part 2 of this blog.

Tuesday, December 25, 2012

Practical Scrum - Scrum of Scrums


Scrum works best when the team size is small and co-located. In my previous blog, I talked about a co-located scrum team structure and how common IT roles map to the Scrum team.
There will be times when the product is too large for a small scrum team to work on. This is where you break your product functionally into smaller streams and have different coordinating teams working on them. Scrum of Scrums (SOS) is multiple scrum teams working and coordinating on the same product or towards the same goal. The word ‘same’ is key here: if the products are different, you need separate scrum teams altogether and no coordination is required.

You can also use SOS if the product team is spread across several locations and each location has a fully functional Scrum team. This makes product execution across multiple locations manageable.

Guidelines for Product work distribution among Scrum teams 

Here are the guidelines that can help you structure the work:
  • No two teams should work on the same user story.
  • Strive to allot user stories by feature. So, if in Iteration 1 a team was working on user stories of Feature 1, try to assign this feature to the same team in the next iterations as well. With multiple scrum teams, you need to do some more planning. Here, you are just assigning features; the individual tasks are still up to the Scrum team to work on.
  • For ease of planning, the estimation scale has to be the same for each user story. To achieve uniformity of scale, it’s recommended to have a few initial user stories estimated with all teams, or with representatives of all teams.
  • Release planning will be done as in normal scrum, where prioritized user stories are divided across iterations considering the combined velocity of all Scrum teams.
  • Iteration planning will involve dividing user stories among Scrum teams in proportion to their velocity. Note that velocity will be different for each team.
  • The iteration length for each team should be the same. This eases planning and coordination.
  • For dependencies, coordination is of utmost importance, and Scrum of Scrums meetings are a good mechanism to share progress.

SOS Meetings for Teams Coordination 
SOS teams coordinate via a meeting in which one representative from each team participates. The meeting lasts 15 to 30 minutes, and its frequency can be daily (if more coordination is required due to dependencies) or sporadic, with a minimum of once a week.

In my view, Scrum of Scrums usually should be conducted during following occasions:
  • Just after daily stand-up meetings, to update the other SOS participants about the progress made between the last meeting and now, and what is planned to be achieved before the next meeting. Any dependencies among teams are also discussed. Any impediments are brought forward, but their resolution is discussed outside this meeting.
  • Just after the Release Planning meeting: with an aim to identify any dependencies among user stories that will be accomplished by the coordinating scrum teams.
  • Just after the Sprint Planning meeting: to make all teams aware of their Sprint goals and understand the deliverables and dependencies.

The Team Structure 
In a previous blog I mentioned the team structure of a co-located scrum team. On top of the description of roles mentioned in that blog, here are some additional points to consider:

Product Owner Role: The scrum of scrums should share the same Product Owner(s), or the Product Owner of each team should coordinate to present one face to all the teams. They maintain the same Product Backlog so that you always deliver the topmost-priority feature in a coordinated way.
Scrum Master Role: Each team has its own Scrum Master. The Scrum Master participates in all regular meetings and also participates in the SOS coordination meeting. However, it’s not required that only the Scrum Master participates in SOS meetings; it can be anyone from the team.
Developer and Tester Roles: These roles are not shared among the teams. Each team has its own dedicated Developers and Testers.
Specialized Roles (UI Designer / DB Specialist / Technical Writer / Business Analyst etc.): You might consider sharing them. Dedicated resources are always best, as resource sharing causes thrashing, and context switching is a costly affair.

A Lean Alternative - Product Coordination Team (PCT)

Lean methodology suggests an alternate to SOS team. The PCT team has representatives from each team just as in SOS. Lean says that SOS tends to be biased towards their own teams and do not think of larger picture of Product Coordination. PCT picks the work from Product Backlog and in an unbiased way places it in each teams Sprint backlog and coordinate the delivery. It’s essentially SOS team without affinity to their own teams. PCT by its name brings a different perspective. A SOS team should work on this perspective for the greater good of product development.