
Sunday 15 November 2015

How Should a Company Decide What Projects To Do

Deciding this could be achieved simply by having discussions between fellow employees, but as more people become involved, reaching consensus is often difficult.

Alternatively, a consensus can be reached through analysis of ROI. However, to derive an accurate ROI you will need to produce accurate estimates of the work to deliver a project and of the expected monetary benefits, both of which will involve assumptions. Such assumptions need to be assessed for their reliability and are themselves difficult to reach a consensus opinion on.

Often companies attempt to run with the former approach because it is easy to implement, with varying degrees of success. In the early years of a company, the leadership can often have enough knowledge of every aspect of the business to make informed decisions about which projects to sanction. However, as the company grows this becomes increasingly difficult, and keeping with the same model brings less success. Companies eventually conclude that they must embrace the latter approach of a more scientific analysis of ROI.

However, the difficulty of transitioning to such a model is huge.
A company must first estimate ROI, and many factors can influence this, making it difficult to estimate accurately. Such factors include how much the company will make from a new product once it is released, which in turn is influenced by factors such as how well the product is received by customers.

Once an ROI is estimated, the team also needs to estimate how much work is required to deliver the product. Companies and people vary in how accurately they estimate this, often affected by influences such as employee turnover or the business changing direction.

Let's say a company can fairly accurately estimate both ROI and the required work effort for ALL proposed projects. The business also needs to decide how it should allocate money to individual areas of the company. Let's say this is achieved as well; we now need to decide what work to do. Should this be based solely on the ratio between cost and ROI? Well, if one project will consume a huge percentage of the overall budget, many parts of the company will be neglected, which will be detrimental to those areas, and good people may leave the company, resulting in a sharper long-term decline in ROI. Also, if reputation-building projects are declined in favour of other, higher-ROI projects, in the long term this can have a hugely negative effect as the retention rate of customers drops. Then there are projects whose rejection could lead to the loss of important accreditations such as ISO, projects which avoid the company receiving fines, projects that are enablers for future expansion but provide little or no ROI now, and projects that reduce risk for the company but produce no ROI, such as producing data backups of systems.

As you can see, even if a company moves to a more ROI-based decision model, it still needs to make many judgement calls that are not based on scientific analysis of numbers, and in actual fact the overall number of decisions can escalate.

Can a company even remove these decisions from the process? It is certainly possible if you decide what percentage of the overall budget a single project can consume, use some mathematical hypothesis testing of the assumptions, and decide whether you want to spread projects throughout the company rather than concentrating the budget on a select few. If you do spread budgets throughout the company, the approach for doing this effectively must be decided: based on department size, department importance to the company, or a mixture.

Dividing projects into logical groups can allow the company to select a diverse set of projects that provide benefit in various ways, rather than focusing solely on ROI, such as:

1. Positive ROI projects
2. Reputation building projects
3. Accreditation projects
4. Employee well being projects
5. Business risk reducing projects
6. Future positioning projects

As a company you can decide on percentage weightings of importance of the above categories.

Then you also need to decide on the spread of budgets to the departments, broadly speaking split into the following categories, but this will vary per company:

1. Finance
2. IT
3. Sales and Marketing
4. HR
5. Property
6. Legal
7. Customer Service
8. Media

Each of the above is usually sub-divided many times.

Now you can start deciding what budgets can be provided to teams, making sure that the chosen projects produce the intended distribution of budget across the six types of projects outlined above.
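Purely as an illustration of the weighting idea (the figures and percentages below are hypothetical, not recommendations), the split could be calculated very simply:

//Hypothetical example: split a 1,000,000 project budget across the six categories above
Map<String, Decimal> weightings = new Map<String, Decimal>{
    'Positive ROI' => 0.40, 'Reputation' => 0.15, 'Accreditation' => 0.10,
    'Employee well being' => 0.10, 'Risk reduction' => 0.15, 'Future positioning' => 0.10
};
Decimal totalBudget = 1000000;
for (String category : weightings.keySet()){
    System.debug(category + ' budget: ' + (totalBudget * weightings.get(category)));
}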

Once you have run a year-long (or even longer) sequence of releases, analyse whether the ROI produced is as expected and then further refine your model based on this input.

Since following the ROI approach requires a huge amount of analysis, it requires a lot of resource to arrive at reliable decisions, and consequently the cost can often prohibit smaller companies from adopting this approach. A company must therefore decide, during its natural evolution, at what stage it is appropriate to transition to an ROI approach.

Also, the ROI approach is vastly more complex than simply trusting the leadership to make the right decisions, and unless every aspect outlined above is scrutinised and analysed in minute detail for all proposed projects, the effectiveness of this decision-making approach is undermined. In summary, if a company follows the ROI approach it must do it very well to make it effective.

The drawback of this model of giving departments the autonomy to decide where and how to spend their budget is that employees rarely expect to remain with the company in 5 or 10 years' time, so they will naturally take a more short-term viewpoint when deciding where and how to spend their department's budget.

There is, however, another model that a company can follow which takes a more democratic approach: everyone votes on vision cards (basic blueprints of ideas). The first issue with this model is that very good ideas may not get supported, and it can become more of an employee popularity contest than an ideas contest. Secondly, employees may simply not understand all the vision cards and their benefits: a finance executive will have little understanding of the issues faced in IT, so how can that person vote on ideas presented by IT? Of course you can limit what employees can vote on, but that then really drifts towards the second model rather than this one. Thirdly, this model doesn't address employees having a more short-term view; only the first model addresses this.

What I am demonstrating here is that the seemingly simple decision of what a company should do is never simple; it is actually among the most important decisions a company makes, and the approach it takes to make these decisions is crucial to its success. Importantly, that approach should be continually assessed and improved each year.



Sunday 25 October 2015

A Useful Winter 16 Function

Not many people will notice a small function in the Winter '16 release which has the potential to improve the performance of the entire platform considerably, if we all use it wisely.

System.SObject Class

recalculateFormulas()
Recalculates all formula fields on an sObject, and sets updated field values. Rather than inserting or updating objects each time you want to test changes to your formula logic, call this method and inspect your new field values. Then make further logic changes as needed.

For example :

You want to insert an Account in a test method and check that your formulas will be calculated correctly. Previously you would have to perform a DML operation, and we all know how expensive DML is for the platform. This little function bypasses the need to do the DML.

Say your Account is quite basic and has several formula fields.



Account acc = new Account(Name='Steves Test');

//Now test that the formula field StevesFormula__c has "This is a test" as its value, without doing a DML
acc.recalculateFormulas();

System.assertEquals('This is a test', acc.StevesFormula__c);




Saturday 24 October 2015

The New World Of Debugging



I cannot begin to describe how I'm feeling. I'm just so excited. Have you seen the new debugging capabilities in Eclipse and the Developer Console? If you haven't, stop what you are doing now. If you are drinking a nice bottle of Moët, or you are digging into some nice chocolate cake: Stop! Open up Salesforce and have a look.

But is this exciting, is this thrilling? Well, for some it isn't, but for me, god damn it is. Why?

With these tools you will be able to develop faster, release faster, and so satisfy your stakeholders and keep them happy.


You can now do the following:

1.    You can run individual test methods in a test class

           
You can now select individual test methods from your test classes to include in a run. You can also choose whether to run tests synchronously, and you can rerun only the failed tests.


Oh, I was one of the people suggesting this many years ago on the IdeasExchange.

2.    If you have been hitting debug log limits regardless of what logging level you set, you can now start your logging at a specific point to prevent this


Trace flags now include a customizable duration. You can also reuse debug levels across trace flags and control which debug logs to generate more easily than ever before. This feature is available in both Lightning Experience and Salesforce Classic.

A debug level is a set of log levels for debug log categories: Database, Workflow, Validation, and so on. A trace flag includes a debug level, a start time, an end time, and a log type. The log types are DEVELOPER_LOG, USER_DEBUG, and CLASS_TRACING. When you open the Developer Console, it sets a DEVELOPER_LOG trace flag to log your activities. USER_DEBUG trace flags cause logging of an individual user's activities. CLASS_TRACING trace flags override logging levels for Apex classes and triggers, but don't generate logs.

Debug > Change Log Levels

3.    Of course there are other features you should check out, such as all the Analysis features. Go to

Debug > Switch Perspective > Analysis


• You can check any limits that you may be approaching.
• You can check how long it takes to run certain functions, and what actions occur when during execution.
• You can see the order of execution in a tree diagram and in various other ways.
• You can trace variables as they change in your code.

4.    Eclipse debugging


Use the Apex Debugger to complete the following actions.

• Set breakpoints in Apex classes and triggers.
• View variables, including sObject types, collections, and Apex System types.
• View the call stack, including triggers activated by Apex Data Manipulation Language (DML), method-to-method calls, and variables.
• Interact with global classes, exceptions, and triggers from your installed managed packages. When you inspect objects that have managed types that aren't visible to you, only global variables are displayed in the variable inspection pane.
• Complete standard debugging actions, including step into, over, and out, and run to breakpoint.
• Output your results to the Console window.






Saturday 3 October 2015

The Importance Of Estimating Requirements


I haven't been blogging for a while, mainly because I've been doing some DIY work in my house; so although my blogging and my readers have suffered, my kitchen is looking much better.
In this blog I'd like to talk about estimation, something developers don't like much.
Estimating requirements, and estimating accurately, is more important than most developers think it is. Most think it is just another administration task that stops them developing, but without it companies struggle to operate correctly.
There are different types of estimating, such as using story points http://scrummethodology.com/scrum-effort-estimation-and-story-points/, or estimating by time.
Personally I suggest it doesn't really matter which method you choose to estimate stories. Remember a story at this stage has the basic outline of the work and not the detail, so the estimate is a very approximate one.

But if I were to choose a method, I would choose estimating by time. The reasons are: time is a universally known gauge and doesn't need to be calibrated; when new members join your team, with story points they need to be taught what your base story point is, whereas with time they don't; and if you have more than one team in your company, each team may have a different base story point, so if you move staff between teams this can be confusing and lead to inaccuracies. Another benefit of using time is that it can be used to calculate forecasted budgets much more easily, whereas with story points you first need to translate them into their equivalent time before working out the forecasted budgets. Of course you could argue that if you are working with a set sprint length of, say, 2 weeks, and you can complete 5 story points per person in those 2 weeks, then this is the only translation into time that you need.
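For illustration (with purely hypothetical figures): if a team of four completes 5 story points per person per two-week sprint, team capacity is 20 points per sprint. Estimating in time instead, a two-week sprint for four people is 4 x 10 = 40 developer-days, so at a day rate of, say, £400 each sprint costs around £16,000, and a backlog estimated at 120 developer-days forecasts to roughly 3 sprints, or about £48,000, with no points-to-time conversion needed.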
As it comes closer to the project development start date, finer detail of the requirements is gathered and the stories are broken down into small individual tasks.
Some teams believe they only need to refine the story points they gave at the beginning and then calculate how many stories they can fit into a sprint, based on the priority of the stories.
I agree with the overall concept of this, but I believe the individual tasks should be sized themselves. The only issue here is that if you use story points you can have a scenario where a task is 0.1 story points, which undermines the value of using story points on the Tasks of the Stories.

Many teams don't bother entering their actual time spent on Tasks or Stories. Is it really required? If you say you are going to deliver 15 story points in a 2-week sprint, and that is exactly what you deliver, does it really matter if you log your actual time? Well, I would argue it does.
Say for example you have 2 Stories and you use time to size them. You estimate that Stories 1 and 2 will each take 1 week to complete, but in reality Story 1 took just 2 days and Story 2 took 8 days. Both Stories were still completed in the total time that was estimated, but in reality the team is very bad at estimating, and this should be improved.
In the next sprint the team could get it very wrong and grossly under-estimate both Stories and only deliver 1 of them.
The trade-off however is the extra administration time required to enter actual time worked.
So on balance I would suggest using time to estimate both Stories and Tasks. Start with entering actual time until you prove the accuracy of your estimating at both the Story and Task level. Once you prove a consistently high accuracy level across all team members, you can remove the extra administration required to log actual time. Of course, if your team changes considerably you may need to restart the actual time logging for a period.

Sunday 16 August 2015

A Generic Recursive Runtime Decision Making Batch Class


Previously you could only execute 5 batch jobs from any single context.
But one of my ideas on the IdeasExchange was included in a recent Salesforce release: now you can queue up to 100 batches, which are held in the Apex flex queue and visible in the AsyncApexJob object.

What I'd like to cover in this blog is a generic, recursive, runtime decision-making batch class.
We will make a class that requires little change and can serve as the batch processing for any batch job.
There are situations whereby once a batch has fully executed you want to initiate another batch:

1.      The first batch executes as many operations as it can and then initiates a decision process that either executes the same batch process again or ends the execution

            Situations where this scenario can be used:
a.       A callout to a 3rd party system where you don't know how many records exist in the 3rd party system


2.      After the first batch executes, records are set into a state that now allows a different batch to execute. Of course the second batch could be scheduled for a certain time but there is no way of knowing when the 1st batch will complete and so you have to space batches apart. If it is important to complete the operations in a timely fashion you will want to execute the 2nd batch immediately when the 1st batch completes

            Situations where this scenario can be used:
a.       The 1st batch updates a field on the Account which fires a trigger and workflows. This sets conditions on say the Contact object by updating various fields. The 2nd batch now picks up records on the Contact where this field has been updated. So we need the 1st batch to complete for the 2nd to process.


Let's consider a situation where a batch makes a call to a 3rd party system requesting a number of records, but due to payload limitations the 3rd party can only return a certain number of records at a time, and it doesn't provide a means of identifying how many records there are in total because such a call would drain its system resources.
So we need to set up a batch class that makes a call to the 3rd party and retrieves X records. When the batch falls into the finish() method, we call a decision method which identifies how many records were processed, which tells us whether we have processed the last batch or not.






public with sharing class Constants {
    public static final String CONST_DOWNLOAD = 'DOWNLOAD 3rd PARTY';
    //referenced later by decideToRunAgain(); the actual value here is an assumption for illustration
    public static final String CONST_MAX = 'MAX RECORDS';
}






global class batchProcessing implements Database.Batchable<sObject>, Database.Stateful, Database.AllowsCallouts{
    global Integer mx;                      //number of records to process
    global String batchType;                //identifies which batch processing to call
    global String soql;                     //the soql query if the batch is to make a query to feed records into execute()
    global Map<String,String> vars;         //holds any arguments that are to be passed to the batch function in execute()
    global Boolean success = false;         //determines if the last batch execution was successful; if it wasn't we might decide to stop any further batch processing since a fault has possibly been encountered

    global batchProcessing(String thisbatchType){
        batchType = thisbatchType;
    }

    global batchProcessing(String thisbatchType, Map<String,String> thisvars, String thissoql){
        batchType = thisbatchType;
        vars = thisvars;
        soql = thissoql;
    }

    global Database.QueryLocator start(Database.BatchableContext bc) {
        if (soql == null || soql == '')
            return Database.getQueryLocator('Select Id From User Limit 1');
        else
            return Database.getQueryLocator(soql);
    }

    global void execute(Database.BatchableContext bc, List<sObject> glbs){
        if (batchType == Constants.CONST_DOWNLOAD){//identifies the batch type we are calling; for a different batch you simply introduce another if statement
            if (vars.containsKey('Max') && vars.get('Max') != '0'){
                String maxCls = vars.get('Max');
                mx = Integer.valueOf(maxCls) - 1;

                //call the method that retrieves "mx" records from the 3rd party; if the callout can be made and is successful this returns true to "success".
                //You could introduce a for loop here to make the callout a maximum of 10 times, to reduce the number of batch executions
                success = Utils.retrieveData(mx);
            }
        }
    }

    global void finish(Database.BatchableContext bc){
        if (batchType == Constants.CONST_DOWNLOAD){
            if (success){//identifies that the last callout was successful
                Utils.decideToRunAgain(mx);
            } else {
                //do something when the last batch didn't process and encountered an issue
            }
        }
    }
}





This is the decision method:


public static void decideToRunAgain(Integer mx){
    //This custom setting is set in retrieveData() to the number of records retrieved from the 3rd party in the last callout made in the batch execute();
    //if this number is less than "mx", the last callout was the final callout required
    Configurations__c latestCall = Configurations__c.getInstance(Constants.CONST_MAX);
    Integer newlatestCallInt = (latestCall != null) ? Integer.valueOf(latestCall.Value__c) : 0;

    if (newlatestCallInt == mx){
        //the last callout retrieved the same number of records as was requested, so it cannot have been the last callout required and a new batch can be created

        //we also need to check that the number of queued batches is less than 100, otherwise the maximum in the flex queue has been reached
        //unfortunately we cannot halt execution for a time, or continually check AsyncApexJob in a loop waiting for the queue to drop, because that would hit governor limits
        //Note: JobType = 'BatchApex' identifies a batch being processed; JobType = 'BatchApexWorker' identifies the latest record being processed in the batch and so is constantly changing
        if ([Select Id From AsyncApexJob Where JobType = 'BatchApex' And Status = 'Holding'].size() < 100){
            batchProcessing batch = new batchProcessing(Constants.CONST_DOWNLOAD, <<specify the other parameters>>);
            Database.executeBatch(batch, 1);
        }
    }
}
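
For reference, retrieveData() isn't shown above; below is a minimal sketch of what it might look like. The endpoint, the response shape and the way the record count is written into the Configurations__c custom setting are all assumptions for illustration, not part of the original design.

public with sharing class Utils {

    public static Boolean retrieveData(Integer mx){
        try {
            //hypothetical callout; the endpoint and response format are assumptions
            HttpRequest req = new HttpRequest();
            req.setEndpoint('https://example.com/api/records?max=' + mx);
            req.setMethod('GET');
            HttpResponse res = new Http().send(req);
            if (res.getStatusCode() != 200) return false;

            //assume the body is a JSON array of records; process them as needed
            List<Object> records = (List<Object>) JSON.deserializeUntyped(res.getBody());

            //store how many records came back so decideToRunAgain() can compare it to "mx"
            Configurations__c latestCall = Configurations__c.getInstance(Constants.CONST_MAX);
            if (latestCall == null) latestCall = new Configurations__c(Name = Constants.CONST_MAX);
            latestCall.Value__c = String.valueOf(records.size());
            upsert latestCall;

            return true;
        } catch (Exception e) {
            return false;
        }
    }
}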




           

There are various themes you can employ on this concept, such as pulling all the logic completely out of the batch into separate classes, keeping the batch class lightweight so that it actually never needs to change.
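
As a sketch of that idea (the interface name and shape are my own, not from this post), the batch could delegate to an interface so that new batch types never touch the batch class itself:

//a possible shape for the delegated work; each batch type gets its own implementation
public interface BatchWork {
    Boolean execute(List<sObject> scope, Map<String,String> vars);
    void finish();
}

//the generic batch would then simply hold a BatchWork instance and call
//work.execute(...) from execute() and work.finish() from finish(),
//so adding a new batch type means writing a new BatchWork class rather than editing the batch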

Further information
http://releasenotes.docs.salesforce.com/en-us/spring15/release-notes/rn_apex_flex_queue_ga.htm?edition=&impact=



Saturday 18 July 2015

Object Definition Capture

I launched my 2nd app last week, which is a free app, so I'd like to take the opportunity in this newsletter to do a bit of self-promotion. Hopefully my readers will like it.

I don't want to send you to sleep but I'd like to tell you a short story.

In every company I've worked in so far, Product Owners, Business Analysts etc. often have different styles of capturing and recording requirements. Unfortunately, it is often the case that requirements are not captured accurately or completely. The net effect is that the developer or administrator does not produce what is expected and the tester is not sure what to test. The developers and administrators then have to hold multiple additional meetings until the full requirements are captured completely and recorded, which results in a lot of wasted time.
This frequently occurs when capturing requirements for objects and fields, but this should be the easiest of all types of requirements to capture because the number of different permutations that could be captured in the requirements is actually finite, unlike that of a code related project.

So I decided to create an app that would solve 4 main issues when capturing requirements for Objects and Fields.


1. Capture object and field requirements in a consistent way for all Product Owners and Business Analysts

2. Save time for Product Owners and Business Analysts when capturing requirements

3. Give developers and testers a consistent approach so that requirements are easily understood

4. Help Product Owners and Business Analysts accurately capture all relevant requirements


15 companies have already installed my app in just 4 days since it was launched.


You need to add your org to Remote Site Settings, so if your org is, say, https://na12.salesforce.com, you would add this URL to Remote Site Settings. The instance part of the URL (na12 in this example) is the part which will be different for your Salesforce org.

Another benefit that the app could provide for you is that it will find all the Report Types and Page Layouts that you have in your system and enter them into 2 Custom Objects. You may be able to use this data for other purposes.

I hope you find the app beneficial.


Structuring Salesforce Invocable Methods

Invocable Methods are an important addition to the platform. I've been calling for a complete overhaul of the Salesforce workflow engine for a long time, and finally Process Builder was introduced a couple of releases back.

There are a few issues and limitations of Invocable Methods, however:

  1. Passing SObject lists appears not to work; instead you have to pass the Ids of the SObjects and then perform a SOQL query to get them, so you have to be mindful of SOQL limits. I would expect Salesforce to fix this issue in future releases, so this might go away.


  2. Another limitation is that you can only have one Invocable Method per class, which means you can end up with many classes. There is a neat way to get around this, however:

Let's consider 2 examples. One Invocable Method sends an email introducing customers to The Self Evolving Software app; the 2nd Invocable Method updates a field on the Account to format postcodes correctly.


@InvocableMethod(label='Setup_Salesforce_To_Salesforce_Email' description='Sends an Email To Explain Setup Of Salesforce To Salesforce')
public static void sendBenefitsEmail(ID[] emls){
      Attachment[] att = [Select id,Body, Name From Attachment
      where name='SES Introduction.pdf'];
       
      UtilEmail.sendEmail(emls,'How To Use The Self Evolving Software App', att);        
}




 @InvocableMethod(label='Format postcode on Account' description='Format postcode on Account')
public static void formatPostcode(ID[] accs){
      Account[] acc = [Select BillingPostalCode From Account
      where Id In :accs];
      for (Account eachacc : acc){
            eachacc.BillingPostalCode =
            UtilFormats.postcodeFormatting(eachacc.BillingPostalCode);
      }

      update acc;
}








The issue here is that we would have to create 2 classes with 1 Invocable Method in each.

The solution is to route everything through a single InvocableMethodHandler class with one Invocable Method, which calls out to separate classes where the actions take place.




We first need to create a wrapper class to hold our data

In the labels for each InvocableVariable you probably want to display something a bit shorter as this will appear in the process builder.

 public with sharing class KeyValueInv {
      @InvocableVariable(label='Optional key if you want to represent a collection' required= false)
      public String key;
      @InvocableVariable(label='Stores the value' required= true)
      public String value;
      @InvocableVariable(label='The data type of the value' required=true)
      public String fieldtype;
      @InvocableVariable(label='This identifies the function to call' required=true)
      public String type;
     
      public KeyValueInv(String thiskey, String thisvalue, String thisfieldtype, String thistype){
            this.key = thiskey;
            this.value = thisvalue;
            this.fieldtype = thisfieldtype;
            this.type = thistype.toUpperCase();
      }


}

Below is the only invocable class and method you will ever need to create, as we can send any type of data to it via the KeyValueInv class, and within the InvocableMethodHandler we identify the function to call.

public with sharing class InvocableMethodHandler {
      @InvocableMethod(label='' description='')
      public static void processType(KeyValueInv[] kvLst){
            //KeyValueInv upper-cases "type", so compare case-insensitively
            if (kvLst[0].type.equalsIgnoreCase('Send Benefits Email')){
                  UtilClass.sendBenefitsEmail(kvLst);
            }
            else if (kvLst[0].type.equalsIgnoreCase('Format Postcode')){
                  UtilClass.formatPostcode(kvLst);
            }
      }

}



 //these methods live in UtilClass, referenced by the handler above
 public static void sendBenefitsEmail(KeyValueInv[] kvs){
    //code .....
 }


public static void formatPostcode(KeyValueInv[] kvs){
    //code .....
}





Now when you create your Process Builder you will be able to set the type to 'Send Benefits Email' or 'Format Postcode', and this will decide which function ultimately gets called.
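
For completeness, here is a minimal sketch of invoking the handler directly, for example from Anonymous Apex or a unit test; the record Id below is hypothetical:

KeyValueInv kv = new KeyValueInv('Account', '001000000000001AAA', 'Id', 'Format Postcode');
InvocableMethodHandler.processType(new KeyValueInv[]{ kv });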

Saturday 13 June 2015

Intro to Unit Test Data Creation Framework continued…



In my last blog http://stevefouracre.blogspot.co.uk/2015/06/intro-to-unit-test-data-creation.html I gave examples of how the framework can be used. I also introduced rapidProcessing. Now we will expand the framework making use of rapidProcessing.

Reminder:
            rapidProcessing allows you to bypass the code in triggers, allowing the test data to be created much faster (this can also be useful when you are migrating data into Salesforce). In both of these situations you are telling Salesforce exactly what data to create, and you don't want the system to manipulate the data further or to perform any actions within the triggers that may do all kinds of things, such as creating additional business process data like Tasks, Events and Cases, or sending out emails to customers (the latter won't happen in unit tests anyway, as emails are not sent from unit tests).

Let's take our previous example, which will bypass both the Account and Contact triggers:


KeyValue[] kvsA = new KeyValue[]{};
KeyValue[] kvsC = new KeyValue[]{};

Map<System.Type, KeyValueBulk> keyMap = new Map<System.Type, KeyValueBulk>();
keyMap.put(Account.class, new KeyValueBulk(1, kvsA));
keyMap.put(Contact.class, new KeyValueBulk(5, kvsC));

TriggerController.rapidProcessing = new Map<System.Type, Boolean>{ Account.class => true, Contact.class => true};

//now that rapidProcessing has been turned on for both objects, the code in the triggers will be bypassed. You will need to build the Trigger Control Framework into your triggers:

TestDataComplexData dataCl = new TestDataComplexData ();
dataCl.insertAccountAndContacts(keyMap);



OK, for the triggers to be bypassed in the above example, we first need to create 2 Hierarchical custom settings:

Triggers_Off__c
            This custom setting needs to have the following field:

            Field Name    Data Type
            value         Boolean


Trigger_Per_Object__c
            This custom setting needs to have the following fields:

            Field Name    Data Type
            Account       Boolean
            Contact       Boolean

            For each additional trigger you create, you will need to create an additional field in this custom setting for that trigger, and you will need to add a new else if {} statement to the globalTriggerPerObjectControlSetting() function in the TriggerController class. An example of this is shown next for the Account and Contact.

The 2 custom settings above can be used to bypass the triggers, typically either when you are making a deployment to Production or when you are performing a data migration. They allow you to disable individual triggers or all triggers per user, per Profile, or for the entire system.


In the TriggerController class we created a number of variables to bypass the Account trigger; now create a similar set of variables for the Contact trigger. We also need to add 2 functions to the class: globalTriggerControlSetting() and globalTriggerPerObjectControlSetting():



            //Contact - Only for testing to check if the code ran or not
            public static boolean Contact_Update_Succeeded = false;
            public static boolean Contact_Insert_Succeeded = false;
            public static boolean Contact_Delete_Succeeded = false;
            public static boolean Contact_UnDelete_Succeeded = false;

            //Contact - Disable / Enable parts of trigger
            public static boolean Contact_DisableAllTypes = false;
            public static boolean Contact_DisableInsert = false;
            public static boolean Contact_DisableUpdate = false;
            public static boolean Contact_DisableDelete = false;
            public static boolean Contact_DisableUnDelete = false;

public static Boolean globalTriggerControlSetting(){
    return (((Triggers_Off__c.getOrgDefaults() != null) ? Triggers_Off__c.getOrgDefaults().value__c : false)
            || Triggers_Off__c.getInstance(UserInfo.getUserId()).value__c
            || Triggers_Off__c.getInstance(UserInfo.getProfileId()).value__c);
}

public static Boolean globalTriggerPerObjectControlSetting(String obj){
    if (obj == 'Account__c')
        return (((Trigger_Per_Object__c.getOrgDefaults() != null) ? (Boolean)Trigger_Per_Object__c.getOrgDefaults().Account__c : false)
                || (Boolean)Trigger_Per_Object__c.getInstance(UserInfo.getUserId()).Account__c
                || (Boolean)Trigger_Per_Object__c.getInstance(UserInfo.getProfileId()).Account__c);
    else if (obj == 'Contact__c')
        return (((Trigger_Per_Object__c.getOrgDefaults() != null) ? (Boolean)Trigger_Per_Object__c.getOrgDefaults().Contact__c : false)
                || (Boolean)Trigger_Per_Object__c.getInstance(UserInfo.getUserId()).Contact__c
                || (Boolean)Trigger_Per_Object__c.getInstance(UserInfo.getProfileId()).Contact__c);
    else return false;
}



The Disable variables, e.g. Contact_DisableAllTypes, allow you to disable all of a trigger or individual parts of a trigger, and will commonly be used within the body of the code, including unit tests. We could have used the custom settings for this, but that would involve using DML to turn the triggers on and off.
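
To show how the pieces fit together before the next post builds the full trigger, here is a minimal sketch (my own, not the framework's final code) of how a Contact trigger might consult these controls:

trigger ContactTrigger on Contact (before insert, before update) {
    //bail out if triggers are switched off globally, per object, or via the in-code Disable variable;
    //the framework's rapidProcessing map would typically also feed into these controls
    if (TriggerController.globalTriggerControlSetting()
            || TriggerController.globalTriggerPerObjectControlSetting('Contact__c')
            || TriggerController.Contact_DisableAllTypes) {
        return;
    }
    if (Trigger.isInsert && !TriggerController.Contact_DisableInsert) {
        //... insert handling ...
        TriggerController.Contact_Insert_Succeeded = true;
    }
}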



In the next blog we will create the code for the trigger.