Friday, June 20, 2014

Introduction to APEX

Introduction


A couple of weeks back I wrote an article: Introduction to Visualforce. I also promised to write an article introducing Apex. Here I am keeping my promise.
I believe anyone reading my blog will already be familiar with Visualforce, Apex, etc. But a blog with 101 in its name should really start from the very basics, because beginners and curious individuals will expect introductory articles.

So in this post we will look at the following introductory Apex concepts;

  • Definition
  • Apex and Big Bro (Java)
  • Authoring Apex Code
  • Software development on the Force.com Platform
  • Common Limits
  • Apex Code Framework - Classes, Triggers, Execute Anonymous
  • How to invoke Apex
  • When to use Apex
  • Basic language constructs
  • Dynamic Apex
  • Testing and Deployment


This is not all that Apex Code has to offer, but I think this should be enough to get you going, and when you are more confident, I encourage you to take a look at more complex topics and read more of my posts. In this post, I will try to give you the big picture while laying emphasis on what is important and what you need to know. A few topics like Web Services, Batch Apex and Email Services will not be discussed in detail. Some complex topics like dynamic Apex will be discussed in detail, because I believe it is very important and should be used from day one if you want to really understand and embrace the Apex programming mindset. Hope you enjoy the read.

Definition


According to the folks at Salesforce;

"Apex is a Java-like, multitenant, scalable, secure, proven and trusted proprietary programming language running natively on Salesforce servers and strongly-typed to Salesforce metadata".

What "Java-like" means exactly will be explained in the next section, Apex and Big Bro (Java). For now we will look closely at a few important characteristics of Apex, some of which are mentioned in the definition above;

Multitenancy


Just like many tenants living in an apartment complex share the same underlying infrastructure and resources such as electricity, plumbing etc., companies and individuals (tenants) using the Salesforce.com cloud share resources in a similar way. In the Salesforce multitenant environment, data and code from one org can neither be seen nor accessed by or from another org. This is made possible by the Apex Virtual Machine (VM), a layer of abstraction that monitors shared code execution in the Salesforce cloud and provides the flexibility, safety and control required for multitenancy.

Metadata Awareness 


As mentioned in the definition above, Apex is strongly typed to the Salesforce metadata. As you probably know, an organization's data model can be customized through the declarative setup by creating sObjects and fields, which are stored as metadata. Metadata awareness simply means that Apex code is aware of these sObjects and fields, and therefore sObjects or fields referenced in Apex code cannot be deleted.

On Demand


Apex is the world's first "on demand" programming language. This means Apex runs entirely on Force.com servers without requiring any local servers or software. Because of this, Apex developers can focus on innovation rather than infrastructure, because performance, security, scalability, compatibility and maintenance are the responsibility of Salesforce.com, not theirs.

Scalability


The fact that Apex code runs in a multi-tenant environment, with logical partitions that separate the code of one org from that of all other orgs, and the fact that Apex code is "on demand", make it highly scalable. Applications developed using Apex code can be scaled indefinitely to support additional users, without having to deploy additional servers.

Proprietary


Code written using traditional programming languages such as Java or C++ is fully flexible and can tell the system to do almost anything. Apex, on the other hand, is not a general-purpose language but a proprietary language used to implement specific business logic. Apex is governed and can only do what the system allows it to do. There are two very important aspects to this.
First, due to multi-tenancy on the Salesforce.com platform, governance of Apex code is absolutely necessary to prevent a single org from consuming or monopolizing all resources. This could happen very easily if a developer makes a mistake and builds a recursive loop in a function somewhere which calls itself indefinitely.
The second aspect is more of a disadvantage. If you begin using proprietary Apex code, you may be "locking yourself in" with Salesforce.com, meaning the cost of later switching to another vendor might just be too high. This discussion has been around from the time Salesforce.com introduced Apex Code. Read this article if you care for more insights into this kind of discussion: "salesforce.com's Apex language is not the lock-in attempt it appears to be", and this one too: "Force.com sheds the proprietary tag…almost". Read the comments too ... that's where the real discussion takes place.

Others even argue that the efficiency of having a stable platform with few performance and stability issues is wasted in trying to code around the limits. My take is that I only partly agree. Partly, because right now there are Force.com implementations out there with thousands of users, which shows that despite the current governor limits, a properly designed application that uses Apex Code can be highly scalable and very performant. I am sure that as Salesforce.com and the Force.com platform continue to grow, we will see some of these limits extended more and more. But understanding the basic concepts of the Force.com platform and using appropriate design patterns will not only make you an excellent Force.com developer, it will make you a better developer in general. I recommend Dan Appleman's book "Advanced Apex Programming" for code design patterns geared at overcoming these governor limits. I have both editions lying on my desk and I have found the book to be very helpful. It is not for complete beginners though, so go for it after you have gotten a little Apex coding under your belt.

To conclude this section on the definition, I would like to mention two characteristics of Apex code which do not fit into any of the sections above;

  • Apex code is not automatically upgraded with each new release; instead, it is saved against a specific API version. It does, however, remain backward compatible with code written against previous API versions.
  • The Force.com platform has a built-in framework for testing and deploying Apex code. We will talk in more detail about the importance of testing and the deployment possibilities in a later section of this article. But note that to ensure that most of the code has been tested, the platform requires a 75% code coverage over all the Apex code in your org before it can be deployed to production.


Apex and Big Bro


Apex is Java-like but it is NOT Java. 

Apex is similar to Java in the following ways;

  • Both are object oriented
  • Syntax and notation are very similar
  • Both are compiled, strongly typed and transactional 


Apex differs from Java in the following ways;

  • Apex is not a general purpose language like Java. It is proprietary and governed such that with Apex we can only do what the platform allows us to do
  • Apex runs in a multitenant environment
  • Apex is on-demand and is compiled and executed natively in the cloud on Salesforce.com servers
  • Apex is case-insensitive
  • Apex requires 75% unit test coverage for deployment into a production environment

Differences that relate to the way classes are constructed and behave will be discussed in detail in the section Apex Code Framework - Classes, Triggers, Execute Anonymous.

One of the main differences between Java and Apex revolves around "Static Variables" and the "Execution Context". You don't know what this is? Don't worry about it. We will discuss it more in the next section when we talk about Triggers. Please bear with me.

Other subtle but very meaningful differences also exist, and we will mention them where relevant. For example, in Apex you can use the array notation to declare lists, which are dynamically resizable, whereas Java arrays are not.
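As a tiny sketch of that difference (hypothetical values; the equivalent add call on a Java array would not compile):

String[] names = new String[]{'Ada'};
names.add('Grace');            // legal in Apex: the "array" is really a List and grows dynamically
System.debug(names.size());    // prints 2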

It is important to know these differences, especially when you are new to Apex, because you need a certain mindset that will make you embrace the governor limits and have fun coding around them. Understanding these differences is key to being successful with Apex. If you have chosen to work with Apex or are required to use it at your job, rather than dwell on how limiting Apex is, you would be better off embracing these limits as a challenge and adopting patterns to help you code around them. Believe me when I say this: you will not only become a better programmer, you will have fun at it.


Authoring Apex Code


Apex code can be developed in one of the following ways;

  • Code Editor on the UI
  • Developer Console
  • Force.com IDE

UI Code Editor


Apex code can be developed using the UI code editor, which can be accessed under Setup > Develop > Apex Classes



Developer Console


Another way to develop Apex on the UI is to use the Developer Console. This is a browser-based IDE used to write, test and execute Apex and Visualforce code. Using the Developer Console, developers can also view logs and browse and query the database. Code coverage is an underlying concept of software development on the Force.com platform. You can use the Developer Console to view code coverage for different classes and open up these classes in the code editor within the Developer Console to see exactly which lines are covered and which aren't. This can be very helpful when designing unit tests, so that all aspects of the class are tested.





Force.com IDE



The Force.com IDE is a plug-in to Eclipse that allows developers to write and test Apex code. It is a powerful software development environment used for creating, modifying, testing and deploying Force.com applications. Being based on the Eclipse platform, it provides a comfortable environment for developers familiar with IDEs, allowing them to code, compile, test and deploy all from within the IDE.

You can use the Force.com IDE to create Force.com IDE projects. This is done by setting up a connection to a sandbox org, which is necessary for reading metadata from and writing metadata to the sandbox org. The Force.com IDE then offers tools for creating classes, triggers, Visualforce pages, components etc.




Another useful tool for Apex development is the Execute Anonymous view which can be used to quickly evaluate Apex code or write scripts that change dynamically at runtime.

Finally, the Force.com IDE also offers a tool for browsing objects and fields directly within the IDE, called the Schema Browser or Schema Explorer. Developers can use it to view the data model, including all relationships between different objects. The Schema Browser also displays information such as object visibility, permissions, data types and lookup values. Finally, it can be used to execute queries against the database. It is read-only and does not interact with the rest of the workspace.

Software development on the Force.com Platform


Apex code is on-demand, which simply means it lives and breathes in the cloud. This is a fundamental aspect to understand about Apex software development. So how does it get to the cloud, and how can we retrieve it from the cloud when it is requested?

Whenever a developer writes code and saves it, a compiler on a Force.com server is invoked and checks the code syntax. If the code compiles successfully, it is saved on the server. If the compiler issues compilation errors, the code is not saved, and in most cases the developer gets an accurate description of what went wrong so he can immediately fix it.
Note, however, that a local copy is kept only when using the Force.com IDE. If the developer is using the UI code editor and compilation errors occur, the code is not saved on the server and is therefore lost. With the Force.com IDE, the code is saved locally regardless of whether compilation errors occur or not.



Code that passes the syntax checks is stored in the database. Whenever a user requests code, the interpreter is invoked; it executes the code logic and returns the results back to the user.

This is the main aspect of software development on the Force.com platform. We will still talk about other important aspects of software development that relate to the Force.com platform, such as the execution context and static variables, asynchronous Apex with future methods, Scheduled Apex, dynamic Apex, etc. These are all core aspects which define the software development process on the Force.com platform, and for a better understanding we will discuss each of them where appropriate.


Common Limits



Due to multi-tenancy in the Salesforce.com cloud, Apex code is governed to allow the most efficient use of the resources that are available to all tenants. These governor limits make sure that no single tenant can hijack or monopolize resources to the detriment of other tenants. To be a good Apex developer, it is absolutely essential that you understand what you can and cannot do on the platform. Only then will you be able to adopt design patterns that will enable you to develop amazing apps and be a hero among your peers.

The picture below shows a list of the most common governor limits you should watch out for when developing Apex code:


Apex code governor limits on the Force.com platform

The limits shown above are only those that apply to software development involving Apex code. Many other different types of limits apply on the Force.com platform and will sometimes be related to the Salesforce.com edition you have purchased.

Always check the documentation for all governor limits enforced by the Apex runtime engine. This is especially important since the limits, or the policies around certain limits, might change between releases. The limits in the image above are pretty self-explanatory and I will not discuss them in detail here. In the sections that follow, I will be mentioning these limits very often and, where appropriate, also the design patterns that can be used to code around them. For example, in the Bulk Patterns section when discussing triggers, I introduce design patterns aimed at coding around the SOQL query, DML statement, heap size and CPU time governor limits. Stay tuned and continue reading as we delve into the body and flesh of Apex code.



Apex Code Framework - Classes, Triggers, Execute Anonymous


Apex code can be written as classes, for example in Visualforce controllers; as triggers, which are "triggered" or executed when a record is saved, edited or deleted; or as anonymous code blocks used to perform a variety of tasks such as debugging, scheduling an Apex job to run, etc.

Classes


In Object Oriented Programming (OOP), everything can be modeled as objects which have specific attributes and exhibit specific behavior. These attributes and this behavior can be modeled using classes.
Therefore a class can be defined as a:

"library of attributes and methods that can be instantiated into an object"

An Apex class therefore acts as a template or blueprint from which Apex objects are created. 

public with sharing class LeadCreationController {
    public Integer leadCount { get {return 0;} set; }
    public List<Lead> newLeads { get; set; }

    /* Initialise lead list and create first lead */
    public void createNewLeads(){
        newLeads = new List<Lead>();
        Lead firstLead = new Lead();
        newLeads.add(firstLead);
    }

    /* Initialise the list that will hold leads that are being created */
    public LeadCreationController(){
        createNewLeads();
    }
}



Apex classes are similar to Java classes, making use of access modifiers when defining variables and methods, and making use of the implements and extends keywords to implement interfaces or extend parent classes. However, the following main differences exist between Java classes and Apex classes;

  • The private access modifier is the default, meaning if no access modifier is specified for a variable or method, it will be private and will only be accessible from within the class in which it is defined.
  • The public access modifier does not mean the variable or method is visible to the world as in Java; it means the variable or method is visible to Apex code within the application or namespace. This was done to discourage the coupling of applications, because such cross-application dependencies are difficult to maintain.
  • Apex introduces a fourth access modifier global which is not present in Java. In Apex code the global access modifier has the same meaning as the public access modifier in Java. If a method needs to be referenced outside of the application e.g. a Web Service method, the global keyword must be used to allow such access. However use the global access modifier with care.
  • Interface methods have no access modifiers, they are always global.
  • Static variables and methods cannot be defined in inner classes, unlike in Java; they can only be declared in a top-level class definition
  • Inner classes behave like static Java inner classes, but do not require the static keyword in their definition.
  • Methods and classes are final by default. But the virtual definition modifier can be used to allow extension and overrides. If a class is defined using the virtual keyword, the override keyword can be used to override a method defined in the virtual class.
  • The with sharing and without sharing keywords used in class definitions are unique to Apex code and can be used to enforce (or bypass) the sharing rules of the current user when writing classes that will serve as controllers or controller extensions for specific objects.
  • Exception classes must extend the Exception class or another user-defined exception. In Java this is the case only with checked exceptions which are exceptions that must be explicitly caught or propagated.
  • Finally, classes and interfaces can be defined in triggers and anonymous blocks, but only as local definitions


To conclude the discussion on classes, there are a couple of annotations that can be used when defining methods that exhibit special behavior or carry out particular functions, e.g. the @future annotation can be used to define a method as asynchronous, meaning that it can be scheduled by the server to run at some time in the future when the load on the server is low, thereby causing it to run with extended governor limits; the @RemoteAction annotation is used to designate methods that can be called directly by JavaScript from within a Visualforce page; and @isTest is used to define a class or method as being part of a unit test.
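As a minimal, hypothetical sketch of the @future annotation (the class name, method name and field usage below are made up for illustration), an asynchronous method could look like this:

public with sharing class AccountEnrichmentService {
    // Runs asynchronously, after the current transaction has committed.
    // @future methods must be static, return void and take only primitives or collections of primitives.
    @future
    public static void enrichAccounts(Set<Id> accountIds){
        List<Account> accounts = [SELECT Id, Description FROM Account WHERE Id IN :accountIds];
        for(Account acct : accounts){
            acct.Description = 'Enriched asynchronously';
        }
        update accounts;
    }
}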

Having looked at classes in detail, let's take a look at triggers. 

Triggers


Apex code enables developers to execute business logic when saving, deleting or merging records, i.e. during so-called DML (Data Manipulation Language) operations.
So a trigger can be defined as an:

"Apex code procedure that automatically executes during any one of the following operations insert, update, delete, merge, upsert and undelete"

As an example, a trigger can be run to check for duplicate leads before inserting new leads into the database. A trigger can also be run when creating a Task, to update the open activity count on the Account for which the task is being created, as shown below:

trigger TaskAfterInsert on Task (after insert) {
    if(!AccountServiceHelper.getOpenActivitiesCounted()){
        AccountService aservice = new AccountService(trigger.new);
        aservice.updateOpenActivityCountOnAccounts();
        AccountServiceHelper.setOpenActivitiesCounted(true);
    }
}


Following the definition above, the following trigger events are possible;


  • before insert
  • after insert
  • before update
  • after update
  • before delete
  • after delete
  • after undelete


All triggers define implicit variables that allow developers to access the runtime context. These variables are called trigger context variables and are contained in the System.Trigger class. They are listed below, and a short sketch of how they are typically used follows the list;


  • isExecuting
  • isInsert
  • isUpdate
  • isDelete
  • isBefore
  • isAfter
  • isUndelete
  • new : list of the new versions of the sObject records. This list cannot be used in Apex DML operations and its records cannot be deleted; the records can only be modified in before triggers
  • old : list of the old versions of the sObject records. Only available in update and delete triggers. Trigger.old is always read-only (it cannot be used in a DML operation)
  • newMap : a map of IDs to the new versions of the sObject records. Only available in before update, after insert and after update triggers.
  • oldMap : a map of IDs to the old versions of the sObject records. Only available in update and delete triggers.
  • size : the total number of records in a trigger invocation, both old and new
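
As a small sketch of how these context variables are typically used (the comparison logic itself is made up for illustration), an update trigger can compare Trigger.new with Trigger.oldMap to react only to records that actually changed:

trigger AccountBeforeUpdate on Account (before update) {
    if(Trigger.isBefore && Trigger.isUpdate){
        for(Account acct : Trigger.new){
            // Trigger.oldMap holds the old version of each record, keyed by Id
            Account oldAcct = Trigger.oldMap.get(acct.Id);
            if(acct.Website != oldAcct.Website){
                acct.Description = 'Website changed on ' + System.today().format();
            }
        }
    }
}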


We will now look at a few fundamental concepts of software development on the Force.com platform that relate to triggers and DML operations.

Execution Context and Static Variables


Traditionally, or in most programming languages, static variables are variables that exist for a given class and are shared by all instances of that class. They are often used as general-purpose global variables, e.g. to share information among all instances of a class. So static variables always exist, regardless of whether a class instance is created or not - traditionally, of course. This is however not the case in Apex code. In Apex code, static variables live only for the duration of the execution context. So the next important question we need to answer is: what is this execution context that determines how long our static variable lives?

Defining the execution context is kind of tricky but here is my take;

The execution context involves all actions such as arithmetic operations, method invocations, service invocations etc. performed in response to an end user action performed on the UI or other action performed via the API, beginning from the point when the action is initiated and leading up to the point when all DML operations are committed to the database and post-commit logic executes. 

Post commit logic could include sending out emails. Tightly coupled to this definition of the execution context is the order of execution i.e. what happens when a record is inserted, updated, upserted or deleted and in what order. Please refer to the documentation here for a full outline of the order of execution

The order of execution is important because all the operations which take place during it will typically belong to a single execution context. But this still doesn't answer the question of why the execution context is important.

Well it is important for two reasons;

  • the execution context is subject to governor limits which are reset at the end of each execution context
  • the execution context determines the scope and lifetime of static variables.


Within triggers, DML operations might trigger other DML operations, which might trigger yet more DML operations, and so on and so forth ... I think you get the idea. This means we might get into a nasty recursive operation which will lead to unwanted results such as inconsistent data or, more likely, hitting the governor limits faster than we can blink.

So a common design pattern is to use static variables to control execution flow when doing DML operations.

trigger TaskAfterInsert on Task (after insert) {
    if(!AccountServiceHelper.getOpenActivitiesCounted()){
        AccountService aservice = new AccountService(trigger.new);
        aservice.updateOpenActivityCountOnAccounts();
        AccountServiceHelper.setOpenActivitiesCounted(true);
    }
}


public with sharing class AccountServiceHelper {
    public static boolean openActivitiesCounted = false;

    public static boolean getOpenActivitiesCounted() {
        return openActivitiesCounted;
    }

    public static void setOpenActivitiesCounted(boolean value) {
        openActivitiesCounted = value;
    }
}


Look at the TaskAfterInsert trigger above. We check if the openActivitiesCounted variable is set before we perform our counting operation. When we finish, we call setOpenActivitiesCounted(true) to set openActivitiesCounted to true. In this way we ensure that the counting operation will be run only once, even if the TaskAfterInsert trigger is called multiple times for some reason that is beyond our control.

But in my opinion, a lot of time and effort should be put into the design of triggers to avoid or minimize the use of static variables in this way, because it can introduce code maintenance issues. Moreover, you should always strive to know the execution flow when doing DML operations. For example, if you insert a new Contact and a field on Account gets updated as a result, you would expect the update trigger on Account to run exactly once. But what if another developer has added some piece of code that updates a field on Contact whenever an Account gets updated? This will cause the update trigger on Contact to run, which in turn updates a field on Account again, which might cause the update trigger on Account to run again, and so on. Now imagine many developers working on the same sandbox org with poor trigger design. This could be the scenario on many standard and custom objects in your org. It can get pretty messy and hard to maintain, to the point where a new solution design or a complete code refactoring is required. So please spend a lot of time thinking about trigger design before you implement your triggers.

Another important design pattern that revolves around DML operations and governor limits is the bulk pattern design.


Bulk Patterns


"All of your Apex code should be designed to handle bulk operations. All of it - no exceptions". Dan Appleman - Advanced Apex Programming

I totally agree and this is why;

  • DML Limits
  • SOQL Statement limits.

Apex triggers are optimized to operate in bulk, which, by definition requires developers to write logic that supports bulk operations.

Writing Apex code that supports bulk operations entails;

  • minimizing the number of DML operations by adding records to collections and performing DML operations against these collections. A typical collection often used for this purpose is a List.
  • minimizing the number of SOQL statements by preprocessing records and generating sets, which can be placed in a single SOQL statement used with the IN clause.

So another way to say this is:

DO NOT EVER (NEVER) put a SOQL query or DML statement inside a loop. NEVER, and I really mean it, NEVER!


public with sharing class AccountService {
    public Set<Id> accountIds;
    private final Account acct;

    public AccountService(ApexPages.StandardSetController controller) {
        this.acct = (Account)controller.getRecord();
    }

    public AccountService(List<Sobject> objects){
        accountIds = new Set<Id>();
        for(Sobject obj : objects){
            Id accountId = (Id)obj.get('AccountId');
            accountIds.add(accountId);
        }
    }

    public void updateOpenActivityCountOnAccounts(){
        List<Account> accountsToUpdate = new List<Account>();
        for(Account acct : [SELECT Id, Number_Of_Open_Activities__c, (SELECT Id FROM OpenActivities) FROM Account WHERE Id IN :accountIds]){
            acct.Number_Of_Open_Activities__c = (Integer)acct.OpenActivities.size();
            accountsToUpdate.add(acct);
        }
        update accountsToUpdate;
    }
}



In the code snippet above, notice that the accountIds set is used to collect all account Ids, which are then used in the SOQL query in the updateOpenActivityCountOnAccounts function. All accounts to be updated are collected in the accountsToUpdate list, and when we are finished processing the accounts, we run a single DML update statement to update all accounts in the accountsToUpdate list.

Also notice that the SOQL statement in the updateOpenActivityCountOnAccounts method is in a for loop; it is a so-called SOQL for loop. This is also a common design pattern used to avoid hitting the 6 MB heap size limit, because SOQL for loops can process records one at a time using a single sObject variable, as done in the example above, or in batches of 200 sObjects at a time using an sObject list.
Also notice that an inner SOQL sub-query is used to read the number of open activities on each account, thereby avoiding a second SOQL statement. This is another pattern used to reduce the number of SOQL statements: use the relationships between objects when querying for data.
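
As a small sketch of the second variant, a SOQL for loop over a list variable hands you the records in batches of up to 200 at a time:

for(List<Account> accountBatch : [SELECT Id, Name FROM Account]){
    // Each iteration receives up to 200 records, which keeps the heap small
    System.debug('Processing ' + accountBatch.size() + ' accounts');
}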

To conclude the discussion on bulk patterns, let's talk for a while about unit tests. For some reason that I cannot understand, most people tend to ignore bulk patterns when writing unit tests. Bulk patterns are equally important in your unit tests because they will show you whether your code will break when someone does a bulk insert or update, using the Data Loader for instance. Triggers can receive up to 200 records at once, so always strive to test with at least 201 records.
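
A minimal sketch of such a bulk unit test, assuming the TaskAfterInsert trigger and the Number_Of_Open_Activities__c field from the examples above exist in your org (the assertion is illustrative only):

@isTest
private class TaskAfterInsertTest {
    static testMethod void insertTasksInBulk(){
        Account acct = new Account(Name = 'Bulk Test Account');
        insert acct;

        List<Task> tasks = new List<Task>();
        for(Integer i = 0; i < 201; i++){
            tasks.add(new Task(Subject = 'Call ' + i, WhatId = acct.Id, Status = 'Not Started'));
        }

        Test.startTest();
        insert tasks;    // fires TaskAfterInsert in chunks of up to 200 records
        Test.stopTest();

        Account result = [SELECT Number_Of_Open_Activities__c FROM Account WHERE Id = :acct.Id];
        System.assert(result.Number_Of_Open_Activities__c > 0);
    }
}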


Asynchronous Apex


It is difficult to talk about triggers and DML statements without mentioning Asynchronous Apex and what it means for the Software development process on the Force.com platform.

Certain DML operations will normally require the processing of many records, so many that it would be impossible to code around the governor limits. A very good example of such a limit is the maximum CPU time on the Salesforce servers, which is 10 seconds. If you make a large number of changes to roles, territories, groups, users, portal accounts, ownership or public groups participating in sharing rules, you will probably hit a couple of governor limits, because the calculations required will be very resource intensive and may take a very long time. In cases like this, asynchronous Apex can be used to make sure that such time- and resource-heavy operations are deferred to some time in the future when the server has spare capacity (i.e. low load). Normally the server will schedule or queue asynchronous operations for times when the server load is very low and will therefore increase the limits when running such operations.

The operations which are considered asynchronous in Apex are listed below (a small sketch of a schedulable class follows the list);

  • future calls defined by using the @future annotation
  • Batch jobs
  • Scheduled Apex
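
As a minimal sketch of the third flavour, a schedulable class simply implements the Schedulable interface (the class name and clean-up logic below are made up for illustration):

global class NightlyCleanupJob implements Schedulable {
    global void execute(SchedulableContext ctx){
        // Hypothetical nightly job: remove completed tasks older than a year
        List<Task> staleTasks = [SELECT Id FROM Task WHERE Status = 'Completed' AND CreatedDate < LAST_N_DAYS:365 LIMIT 10000];
        delete staleTasks;
    }
}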

Execute Anonymous


From the discussions above, we have seen that Apex code can be written either as a class or as a trigger. There is a third way to write Apex code: as a block of anonymous code.

The Execute Anonymous view is available in the Force.com IDE and the Developer Console and is used to run an anonymous block of Apex code on the server. Anonymous blocks can be used to quickly evaluate Apex code or to write scripts that change dynamically at runtime. When the anonymous code block is executed, the results can be viewed in the debug log, making anonymous code blocks very useful when debugging code.


Execute Anonymous used to schedule an Apex job


The example above shows how Execute Anonymous can be used to write a block of code that schedules an Apex job to run. Also notice the debug statement showing details about the job.
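
Since the original screenshot is not reproduced here, a sketch of such an anonymous block might look like the following (it assumes a Schedulable class like the NightlyCleanupJob sketched earlier; the cron expression runs the job every night at 1 AM):

// Schedule the job and print its Id to the debug log
String cronExp = '0 0 1 * * ?';
String jobId = System.schedule('Nightly cleanup', cronExp, new NightlyCleanupJob());
System.debug('Scheduled job with Id: ' + jobId);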

We have discussed a lot of core concepts of software development on the Force.com platform. Now let us look at the different ways to invoke Apex code and when to use it.


How to invoke Apex


Apex code can be invoked whenever;


  • DML operation occurs
  • Apex Scheduler schedules  a class
  • Apex Classes run in response to a variety of actions


DML Operations : whenever records are saved via the UI or the API, a DML operation occurs. Apex triggers can execute Apex code whenever a DML operation occurs. This is possible because Apex triggers are tied to events such as insert, update and delete. Apex code can then execute immediately before or after these events.

Apex Scheduler : Apex code can also be invoked by creating a job through the Apex Scheduler. The anonymous block could be used to schedule such a job

Apex Classes : Apex code can be invoked when a user is interacting with a Visualforce page. Apex code in the controller is called to retrieve and store data, control navigation, maintain state, perform validation tasks etc.
Apex code can also be invoked when receiving and processing inbound emails. For example, with the Email-to-Case Service Cloud feature, an inbound email will invoke an email service and related Apex code to do tasks like creating cases, sending out auto-response emails, creating tasks for service reps etc.
Apex code can also be invoked through the Web Services API for classes that are exposed as web services using the webservice keyword. The Web Services API can be invoked by any code that supports SOAP messaging.

Finally and independent of the possibilities mentioned above, Apex code can also be invoked through anonymous code blocks which are often used for testing and debugging purposes.


When to use Apex



The reason I love Salesforce.com is its extensive declarative abilities. I love developing software and writing code. But the amount of time you can save by using the declarative possibilities enhanced by Apex code is truly amazing. This is the reason Salesforce.com has gotten me hooked.

As a general rule, always exhaust all declarative possibilities before using Apex code to accomplish more complex tasks.

Here are a few use cases for such complex tasks that can be accomplished by using Apex code;


  • Custom Controllers: Apex code which controls how Visualforce pages behave and what data is available to them, e.g. use an Apex class to create a multi-step wizard which maintains state from step to step and controls navigation. Apex classes could also be called from buttons or links to carry out complex tasks such as applying a single action to multiple targets, performing multiple sequential actions on different targets etc.
  • Web Services: Apex code can be used to call out to SOAP-based and RESTful web services. For example, Apex classes can be used to call out to external web services to perform translations, retrieve data, store data etc. Apex triggers could be used to validate email addresses when creating contacts etc. Apex code can also be used to expose Salesforce applications as web services that can be called by other applications over the internet. For example, an Apex custom web service could be used to perform a custom transactional database operation or a conditional update to a database. This is very useful since the standard Web Service API does not operate transactionally across multiple records and does not have the capability to behave differently based on custom business logic.
  • Field Updates on Other Records: Apex triggers can be used to automate field updates to related or unrelated records e.g. a change to a child contact record, may trigger a change to the related Account parent record or an unrelated record belonging to a custom object Event__c which models some sort of event in which the contact takes part.
  • Data-Driven Sharing: Apex triggers can be used to automate record sharing based on dynamic data-driven record criteria e.g. a field on the record itself could determine if a particular user or group of users are granted access to that record.
  • Automatic Record Creation / Deletion: Apex triggers can be used to automate the insertion or deletion of records e.g a new Task can be created to remind a Sales Rep to follow up on an Opportunity when it reaches a certain stage in the sales process.
  • Complex Validation Rules: Apex triggers can be used to execute validation logic when a record is being deleted. This is not possible with declarative validation rules since they only fire on insert or update. For example, an Apex trigger could make sure that it is impossible to delete opportunities in certain stages.
  • Cleanse/Repair existing records: Using Scheduled Apex and Anonymous Apex, you can run batch jobs or scripts to identify and merge duplicate records, and transform existing records into new structures e.g use Apex triggers to perform de-duplication logic.
  • Custom Email Handler: Apex can be used to create a custom email service for inbound emails which can perform actions such as create or update records when an email is received.
  • Custom Report Page: Use Visualforce controllers together with a Visualforce page to create reports with much more granularity, or reports with queries so complex that the standard declarative possibilities are insufficient, e.g. reports on data stored across multiple related lists.


Now that we have looked at a lot of concepts around Apex code including how and when to use Apex, let us look at some core Apex language constructs that are a must-know if you want to be successful when writing Apex code.


Basic language constructs


When we talk about constructs of any programming language we are referring to data types, conditional statements and loops.

We will not be discussing all of the language constructs here, since most of them are similar to those in Java, e.g. most primitive data types, if-else conditional statements, do-while and while loops, and the traditional for loop.

There are, however, some fundamental differences which are unique to Apex code or require a deeper understanding, and which are absolutely necessary for a good understanding of how Apex code works. These are;


  • ID primitive data type
  • sObject data type
  • collection data types
  • for loops (which we already discussed above)


ID primitive data type


In Apex code the ID data type represents a system generated unique record identifier. Whenever a record is inserted into the Salesforce database, the platform assigns a unique ID to the inserted record. The ID of a record cannot change over the lifetime of that record, even if the record is deleted and undeleted.
Each ID is guaranteed to be globally unique and therefore is the best way to uniquely identify a record.

On the Salesforce platform there are two types of IDs: the 15-character case-sensitive ID and its 18-character case-insensitive counterpart, which has 3 extra characters appended that encode the casing of the original 15 characters. The 18-character ID was introduced so that case-insensitive applications like Excel or Access could safely check the uniqueness of records. For this reason, all API calls return the case-safe, case-insensitive 18-character ID.

sObject Data Type


sObject stands for Salesforce Object and can be defined as;

"a generic data type that is the parent class for all standard and custom objects in Apex"

The ID of an sObject is a read-only value and can never be modified explicitly in Apex, unless it is cleared during a clone operation or assigned in a constructor. The Force.com platform assigns ID values automatically when an object record is first inserted into the database.

As an example, consider the standard Account object. The parent of the Account data type is the sObject. In Apex, custom objects are identified by adding the suffix '__c' to the noun identifying the object, e.g. Event__c might be a custom object which is used to manage events. I intentionally write might be, because Event__c could also be a custom field on a standard or custom object. In lookup or master-detail relationships, it is quite common to use the name of the object to denote a reference to that object, e.g. the custom object Event__c might have a lookup to the Account object, since an Account can have many events. On the Event__c object we would then have a lookup field Account__c to denote the reference to the related account, and on the Account object you would then have a list of events belonging to that account. This sort of relationship is denoted by adding the suffix '__r' to the custom object, and since this is a 1-to-n relationship where many events can belong to a single account, we use the plural form of the noun to denote the relationship on the parent side. So on Account you will have Events__r holding the list of Event__c records belonging to that Account.

List<Account> accounts = [SELECT Name, (SELECT Name FROM Events__r) FROM Account WHERE Id IN :accountIds];
for(Account acct : accounts){
    for(Event__c event : acct.Events__r){
        // ... process each related event here
    }
}



Note that for standard objects, the suffix '__r' is not needed to denote the children in a one-to-many parent-child relationship. Contacts belonging to an Account are referenced from the list named Contacts, not Contacts__r.
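
For example, the standard parent-to-child subquery for contacts looks like this:

List<Account> accountsWithContacts = [SELECT Name, (SELECT LastName FROM Contacts) FROM Account LIMIT 10];
for(Account acct : accountsWithContacts){
    System.debug(acct.Name + ' has ' + acct.Contacts.size() + ' contacts');
}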

SObjects can be created by using the instantiation keyword new. However new will always require a concrete type of the sObject, e.g.

sObject sObj = new Account();


To convert the generic sObject type sObj to an Account data type, we use casting as follows;

Account acct = (Account)sObj;


When instantiating a new sObject, you can pass in comma-separated name-value pairs, e.g.

Account acct = new Account(Name='Project Jubilee', Website='www.projectjubilee.com');


Finally, there is an sObject class with instance methods that can be called on any sObject instance, such as an account record. For a list of all the methods and what you can do with them, check the documentation.
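
A few of the generic instance methods (get, put and getSObjectType) in action, as a small sketch:

sObject record = new Account(Name = 'Acme');
record.put('Industry', 'Energy');                         // set a field without knowing the concrete type
String name = (String)record.get('Name');                 // read a field generically
Schema.SObjectType objType = record.getSObjectType();     // discover the concrete type at runtime
System.debug(objType.getDescribe().getName() + ': ' + name);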

Having looked at the ID and sObject data types which are unique to the Force.com platform, it is time to look at collections and what makes them important for Apex coding on the Force.com platform.

Collections in Apex


When I think of collections, I always have this picture of a data bucket in mind. You can randomly throw things inside if you want (Set), or you could decide that you want them placed in a particular order such that you can retrieve them systematically and in order (List), or you could decide that you want to label or tag your data in a particular way before throwing it into the bucket, such that you can retrieve that piece of data at a later time without having to look through the whole bucket until you find it (Map).

In Apex we use collections to temporarily hold data while it is being processed. You might want to sort data in a specific order or use some criteria to get a subset of the data you read from the database. While it is most often efficient to write your SOQL queries such that you get exactly the type or subset of data you want, it is not always possible, especially if you are looking to aggregate data from many objects and send it to the view.

Collections are not only used to hold data while it is being processed, but are generally used to store data after SOQL queries are performed. This data is then sent to the view in these collections, and the view can use tags like the <apex:repeat> tag to go through the collection and read out the values. This leads to a fast, optimal and lightweight way of handling data in the view.

The three types of collections used in Apex are


  • List
  • Set
  • Map

List


Lists are implicitly indexed and, because of this, they are also implicitly ordered. This means that each time an element is added to a list, it is assigned a unique index. For this reason, a list can hold non-unique values, since these values can be distinguished by their indices.

Use Case: use lists to store the results of SOQL and SOSL queries. In the case of SOSL, the result is a List of Lists of sObjects.

Also use lists to hold records on which you want to perform a DML operation. Only records of the same type can be held in a list, e.g. you could do something like this in a test class

public static final Integer NUMBER_OF_CONTACTS = 201;

private void addContacts(){
    List<Contact> contactList = new List<Contact>();
    for(Integer i = 0; i < NUMBER_OF_CONTACTS; i++){
        contactList.add(new Contact(LastName = 'Test_Contact_' + i));
    }
    try {
        insert contactList;
    } catch(Exception e) {
        System.debug('The following error occurred while inserting contacts: ' + e);
    }
}


Creating Lists


A list can be created in two ways;

by using the standard list declaration syntax as in

List<Account> accounts = new List<Account>();
accounts.add(new Account(Name = 'Test Account'));


by using the array notation as in

Account[] accounts = new Account[]{new Account(Name = 'Test Account 1'), new Account(Name = 'Test Account 2')};


Notice that the lists are populated in different ways. Also note that the array notation is the same as in Java, but in Apex code a list created using the array notation is dynamically resizable, whereas in Java an array is not.

Also note that lists are not arrays, even though they can be syntactically referenced as arrays

Finally go here for a list of methods that are available in the List class.

Set


An Apex Set is an unordered, unindexed collection of unique elements. Unlike lists, sets do not have an index to distinguish duplicate values.
The best use case for sets is in SOQL queries where they are used to collect a set of Ids for which to perform the SOQL query e.g.

public with sharing class AccountService {
    public Set<Id> accountIds;

    public AccountService(List<Sobject> objects){
        accountIds = new Set<Id>();
        for(Sobject obj : objects){
            Id accountId = (Id)obj.get('AccountId');
            accountIds.add(accountId);
        }
    }

    public void updateOpenActivityCountOnAccounts(){
        List<Account> accountsToUpdate = new List<Account>();
        for(Account acct : [SELECT Id, Number_Of_Open_Activities__c, (SELECT Id FROM OpenActivities) FROM Account WHERE Id IN :accountIds]){ // SOQL for loop: avoids the heap size limit (6MB)
            acct.Number_Of_Open_Activities__c = (Integer)acct.OpenActivities.size();
            accountsToUpdate.add(acct);
        }
        update accountsToUpdate;
    }
}


In the code sample above, the set accountIds is populated in the AccountService constructor and later used in the updateOpenActivityCountOnAccounts function to retrieve only accounts whose Id belongs to the set. This is done so that instead of performing a query for each of the Ids in the set, you can perform a single SOQL query to retrieve all records with Ids in the set. This is a typical use case for sets that you will see in a lot of SOQL queries.

Finally go here for a list of methods for working with sets.

Map


A map is a collection of key-value pairs in which each unique key maps to a single value. Values in a map do not have to be unique, but every value in a map has an index or key, which has to be unique.

Map<String, OpportunityTeamMember> teamMemberMap = new Map<String, OpportunityTeamMember>();
List<OpportunityTeamMember> members = new List<OpportunityTeamMember>([
    SELECT Id, User.Email, User.Name, TeamMemberRole, OpportunityAccessLevel
    FROM OpportunityTeamMember
    WHERE OpportunityId = :this.opportunity.id
]);
for(OpportunityTeamMember otm : members){
    teamMemberMap.put(otm.Id, otm);
}


In the above code snippet, we define a map which holds opportunity team members. The keys of the map are the Salesforce Ids of the opportunity team members, which makes them unique. So if the Id of an opportunity team member is known, a call to the get method of the map will return that opportunity team member. Maps are mostly used as buckets of information.

Take note that with maps, you will not get any exception or error if you try to add an already existing index or key to the map. What is there is silently overwritten. This is a potential source of bugs, so beware.
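
A tiny sketch of that silent overwrite:

Map<String, Integer> counts = new Map<String, Integer>();
counts.put('apples', 1);
counts.put('apples', 2);                // no error: the old value is simply overwritten
System.debug(counts.get('apples'));     // prints 2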

Finally go here for a list of methods available for working with maps.

Dynamic Apex


Dynamic Apex comes in four different flavours


  • Schema Describe
  • Dynamic SOQL
  • Dynamic SOSL &
  • Dynamic DML


Schema Describe


You can use the Schema Describe to programmatically learn about the Metadata of your data model and current org schema. The information you can discover using Schema Describe include, top-level objects, their fields, record types etc.

For example, we can use the following code to return a map of all sObject names (keys) to sObject tokens (values) defined in an organization.

Map<String, Schema.SObjectType> globalDescribe = Schema.getGlobalDescribe();
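
From there you can drill further down into the schema; here is a small sketch (using the standard Account object as an example):

// The global describe tells you which sObject types exist in this org
System.debug('sObject types in this org: ' + globalDescribe.size());

// Describe a single object and list its fields
Map<String, Schema.SObjectField> accountFields = Schema.SObjectType.Account.fields.getMap();
System.debug('Account has ' + accountFields.size() + ' fields');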


I use Schema Describe mostly in two ways;

First, to get the picklist values of fields on objects and use them for drop-down menus on Visualforce pages. These menus can be used to filter or group information, e.g. show me all Opportunities with stage "Closed Won". The advantage of using Schema Describe in this way is that whenever the picklist values change, the code does not have to be touched. If your end-to-end system is completely dynamic, your records will be filtered using a newly added picklist value without anyone having to do anything else.

public List<String> opportunityStages {
    get {
        if(opportunityStages == null){
            opportunityStages = new List<String>();
            Schema.DescribeFieldResult stageNameField = Opportunity.StageName.getDescribe();
            List<Schema.PicklistEntry> stages = stageNameField.getPickListValues();
            for(Schema.PicklistEntry stage : stages){
                opportunityStages.add(stage.getValue());
            }
        }
        return opportunityStages;
    }
    private set;
}



I also use Schema Describe very often to dynamically retrieve information about the record types of a particular object. Imagine you want to filter your records based on record types and you have only the names of the record types, but your query returns a reference to the record type, i.e. the record type Id. You can use the getRecordTypeInfosById() method of the describe result to get a Map with the record type Id as key and RecordTypeInfo as value. Using the record type Id of the record, you can retrieve the specific record type from the map and use the getName() method to retrieve the name of the record type you require. All of this without having to query the database. Pretty cool.

public static Map<Id, Schema.RecordTypeInfo> opportunityRTMap = (Schema.SObjectType.Opportunity).getRecordTypeInfosById();
String recordTypeId = '...';
String recordTypeName = opportunityRTMap.get(recordTypeId).getName();


Please do not ever do any picklist or record type describes within loops. This is because there is a limit to the number of picklist and record type describes that can be performed within a single execution context, i.e. 100 as of this writing.

Note that the use of schema information is not limited to Apex. In Visualforce pages you can use the global variable $ObjectType to get access to a variety of schema information. An example could be using $ObjectType to retrieve the labels of fields and use them as headers for table columns or in conjunction with <apex:inputText> tag elements.

Finally, refer to the Schema namespace for the classes and methods which provide schema metadata information.


Dynamic SOQL


In a normal SOQL statement, we usually know what we want to query for when developing our application, so we can hard-code field names and objects into our SOQL statement.

But what if the query we want to write requires input from the user of our application? This is where dynamic SOQL comes in. It gives us the flexibility to create SOQL statements at runtime. These SOQL statements are strings that can be extended using field names or information provided dynamically by the user of the application at runtime. For example;

public static final String EVENT_FIELDS_STR = ' Id, Name, Event_Name__c, RecordType.Name, OwnerId, Type__c, Location__c, Start_Date__c, End_Date__c, CreatedDate, LastModifiedDate';

private static List<Event__c> sessionsWithConflictingSpeakers(Id accountId, SL_SortWrapper sortWrapper)
{
    String queryStr = 'SELECT ' + EVENT_FIELDS_STR + ' FROM Event__c WHERE Account__c = \'' + accountId + '\'';
    // sessionIdsWithConflicts is assumed to be a static Set<Id> defined elsewhere in this class
    queryStr += ' AND Id IN :sessionIdsWithConflicts';
    if (sortWrapper != null) {
        queryStr += ' ORDER BY ' + sortWrapper.getSortField() + ' ' + sortWrapper.getSortDirection();
    }
    return (List<Event__c>) Database.query(queryStr);
}


In the example above, we retrieve information from the custom object Event__c at runtime. The user supplies the account Id for the events he wants to retrieve and how they should be sorted. So the account Id, the sort direction and the field to use for ordering are all supplied at runtime, and therefore we use the power of dynamic SOQL to build and execute our query.

The Database.query method is used to execute dynamic SOQL at runtime. Another Database method used in conjunction with dynamic SOQL is the countQuery(String) method. Refer to the Database class here.
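
A quick sketch of countQuery with a dynamically built count query (the Type__c value used in the filter is assumed for illustration):

String countStr = 'SELECT count() FROM Event__c WHERE Type__c = \'Workshop\'';
Integer workshopCount = Database.countQuery(countStr);
System.debug('Number of workshops: ' + workshopCount);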

SOQL Injection


Talking about dynamic Apex without mentioning SOQL injection is like talking about good without evil; the one just doesn't exist without the other. In my own words;

SOQL injection is a technique that can be used by users to cause the database to execute statements or transactions that were never intended by the current piece of code or the functionality being run

Users can achieve this by passing SOQL fragments into your script, which will then be executed by the Database.query method. This can occur whenever you expect input from users of your application in order to build your dynamic query string. So if you do not want to be fired, and possibly beaten, jailed or even killed for stupidity and recklessness, you as a developer have to handle SOQL injection ALL THE TIME - NO EXCEPTIONS.

Luckily for you and me, Apex provides the String.escapeSingleQuotes method, which can be used to prevent SOQL injection. This method adds the escape character (\) to all single quotation marks in a string that is passed in by the user of the application. In this manner, all single quotation marks are treated as enclosing strings and not as database commands. So guess what: you get to keep your job and avoid being beaten and jailed. Isn't that awesome?
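
A short sketch of the idea (the userInput value stands in for whatever the end user actually typed):

String userInput = 'O\'Brien\' OR Name != \'';              // hypothetical malicious input
String safeInput = String.escapeSingleQuotes(userInput);    // all single quotes are now escaped with \
String queryStr = 'SELECT Id FROM Contact WHERE LastName = \'' + safeInput + '\'';
List<Contact> results = Database.query(queryStr);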

Finally, take note that it is always better to design your whole application in such a way that security issues, one of which is SOQL injection, are never going to be possible, no matter the circumstances. Use the Salesforce data access and sharing model intelligently, so that users can see and modify only what they are allowed to see and modify. Because the APIs open up more access and therefore more potential security loopholes, you should, at the very least, use "with sharing" to enforce sharing rules, respect permissions on objects and the role hierarchy, and apply best design and coding practices to secure your application. Always remember that prevention is better than cure, because it might be too late by the time the disease is discovered.

Dynamic SOSL


SOSL stands for Salesforce Object Search Language, which means it is a language construct used to implement searches across one or more objects. Searches often require parameters or keys which define what to search for. More often than not, these search parameters will come from the end user of your application, which makes user searches a solid use case for dynamic SOSL.

Dynamic SOSL offers developers flexibility when performing searches by allowing SOSL statements to be created at runtime in Apex code. In this way, SOSL statements can be implemented to perform searches without hard-coding object or field names in the SOSL statement.

public static List<List<sObject>> getSearchSuggestions(String searchString){
    searchString = String.escapeSingleQuotes(searchString) + '*';
    List<List<sObject>> searchResults = [FIND :searchString IN ALL FIELDS RETURNING Account(Id, Name), Opportunity(Id, Name), Contact(Id, Name)];
    return searchResults;
}


In the above example, searchString is supplied by the user, maybe by typing a piece of text into a textbox on a Visualforce page and clicking a search button.

Remember what I said above about being fired, beaten, jailed and possibly worse ... well, the same thing applies here. You have to use the escapeSingleQuotes method to escape the user input before using it in your SOSL statement. If you don't, you open the door to possible SOSL injection, which may lead to you being ... well, you know (I have been repeating myself too much).

Dynamic DML


DML means Data Manipulation Language. So dynamic DML provides developers with the ability to perform DML operations on sObjects dynamically. The most important thing to know about dynamic DML is that it allows us to perform DML operations on sObjects without knowing the concrete object type.
This is pretty powerful stuff, because we can write generic functions that perform DML operations on sObjects of any type. The generic function modifyAnySObject in the example below does just that:

 public static void modifyAnySObject(sObject recordToUpdate, String fieldToUpdate, String newValue) {
      // Set the field dynamically by its API name, then persist the change
      recordToUpdate.put(fieldToUpdate, newValue);
      update recordToUpdate;
 }
 // Works for any sObject type, for example an existing Account record
 Account acc = [SELECT Id FROM Account LIMIT 1];
 modifyAnySObject(acc, 'Name', 'Test Account');


Another example from my past was updating hundreds of records, with each record having about 30 fields that had to be updated with new values. I could have written a line of code to update each of the thirty fields, which would be quite cumbersome, error prone and just plain boring. But luckily for me the fields had been designed such that the field names could be constructed dynamically. A perfect case for using dynamic DML. It looked something like this:

 ObjectToUpdate__c objUpdate = new ObjectToUpdate__c();
 for (ObjectWithNewValues obj : objNewValues) {
      // Construct the field API name dynamically from the record's attributes
      String baseFieldName = obj.Region__c + '_' + obj.Country__c + '_' + obj.City__c + '_' + obj.Division__c;
      String fieldName = baseFieldName + '_variable_part__c';
      objUpdate.put(fieldName, obj.newValue);
 }


So in this way I could update all my fields by dynamically constructing the field names and using dynamic DML to overwrite the old values on objUpdate. You might be wondering where the update statement is in the above code snippet. Mind you, this is just pseudo code. However, my concrete case involved calling this code from a before update trigger, meaning the update was done implicitly after the before trigger ran (see the trigger sketch below). See the order of execution for more on this.
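For illustration, a before update trigger along these lines would persist the dynamic field assignments automatically; the trigger name and field name here are made up:

 trigger ObjectToUpdateTrigger on ObjectToUpdate__c (before update) {
      for (ObjectToUpdate__c record : Trigger.new) {
           // Values assigned with put() in a before trigger are saved automatically
           // when the trigger completes; no explicit update statement is needed.
           record.put('Some_Calculated_Field__c', 'new value');
      }
 }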

Testing and Deployment


Testing and deployment both play a very important role in Apex software development on the Force.com platform, so in this section we will briefly discuss both.

Deployment to any production environment requires 75% code coverage to succeed. Code coverage is calculated by dividing the number of unique Apex code lines executed during your test method execution by the total number of Apex code lines in all of your triggers and classes. For example, if your triggers and classes total 200 lines of code and your tests execute 160 of them, your coverage is 80%. (Note: these numbers do not include lines of code within your test methods if you use the @isTest annotation).

Due to the importance of testing on the Force.com platform, it is essential to develop the right attitude towards testing from the very onset. Try not to look at it as only meeting deployment requirements. In my experience on big projects, despite extensive functional, UAT and regression testing, QA always misses something, especially if the application is very complex. So you as a developer unit testing your own individual piece of code is very important, so that when all the pieces fit together there are no major surprises. So please learn what the best practices for unit testing are and how to write proper test code.

Testing


Every application development process on the Salesforce.com platform should include 3 different stages of testing, which should suitably be carried out in 3 different environments;


  • Unit testing
  • Integration Testing
  • Functional Testing & UAT


In the Apex software development process, unit testing is what we focus on. So what is unit testing? I think what Wikipedia says about unit testing is pretty accurate:

In computer programming, unit testing is a software testing method by which individual units of source code, sets of one or more computer program modules together with associated control data, usage procedures, and operating procedures are tested to determine if they are fit for use.

Our discussion of unit testing in Apex will focus on some of the points mentioned in the definition above: testing individual units of code and using realistic associated control data.

Unit Testing Framework

Here are a few things to note when it comes to writing test classes; a short test class sketch that ties them together follows after these notes.

@isTest: use this annotation to define your test classes. It marks your class as a test class, and all the code written in it will not count against your organization's code limit.

@isTest(SeeAllData = true): use this to give your test methods access to existing data in your organization. Use with exceptional care. Your test data should not depend on data already existing in your org, or you will run into serious problems when deploying to another org where the same data does not exist. Just don't do it. Create your own test data.

testMethod: all test methods should be defined as static, void and with the testMethod keyword. A test method can be defined using this keyword in any Apex class but not in a trigger.

Test.startTest(): you can also use this method with stopTest to ensure that all asynchronous calls that come after the startTest method are run before doing any assertions or testing. Each test method is allowed to call this method only once. All of the code before this method should be used to initialize variables, populate data structures, and so on, allowing you to set up everything you need to run your test. Any code that executes after the call to startTest and before stopTest is assigned a new set of governor limits.

Test.stopTest(): each test method is allowed to call this method only once. Any code that executes after the stopTest method is assigned the original limits that were in effect before startTest was called. All asynchronous calls made after the startTest method are collected by the system. When stopTest is executed, all asynchronous processes are run synchronously.

System.assert(), System.assertEquals() etc: it really does amaze me when I come across test methods without a single assert. So how do you know if your code is doing what it is supposed to do? If you are not using asserts in your test code, you are probably not writing effective tests - as simple as that. Use asserts to check if your code is doing what it is expected to do.

runAs(User): use this method to test how your code interacts with your organization's sharing model by changing the user context during testing. Please note that runAs(User) only enforces and verifies sharing and data access; it does not validate CRUD or Field Level Security (FLS) permissions.
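To tie these pieces together, here is a minimal sketch of a test class. It assumes the dynamic DML helper sketched earlier lives in a class I am calling ApexUtils; the names are illustrative, not a prescribed structure:

 @isTest
 private class ApexUtilsTest {
      static testMethod void testDynamicFieldUpdate() {
           // Create our own control data instead of relying on existing org data
           Account acc = new Account(Name = 'Old Name');
           insert acc;
           Test.startTest();
           // Exercise the dynamic DML helper sketched in the dynamic DML section
           ApexUtils.modifyAnySObject(acc, 'Name', 'New Name');
           Test.stopTest();
           // Assert the expected behaviour instead of merely executing the code
           Account updated = [SELECT Name FROM Account WHERE Id = :acc.Id];
           System.assertEquals('New Name', updated.Name, 'The Name field should have been updated dynamically');
      }
 }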

Best Practices for writing unit tests


  • Tests should focus on unit testing all functionality and not merely on reaching the 75% threshold for deployment. Instead, aim for 100% test coverage to build rock-solid applications.
  • Each method in the class you are testing should have a corresponding test method
  • Use asserts to test the proper behavior of your code
  • Apply bulk testing to your test methods. Use at least 200 records in your tests, since all Apex code runs in bulk mode
  • Write portable tests. This implies you should create your own control test data: no hard-coded sObject Ids or references to data unique to the org in which the test is running.
  • Avoid using the admin profile as much as possible when testing business logic. Use runAs(User) to test the business logic within a specific user context (with the right sharing and data access)
  • Use Test.startTest() and Test.stopTest() to test governor limits. 


Running Tests


There are mainly three ways to run tests;

  • Salesforce UI
  • Force.com IDE
  • Developer Console


Salesforce UI

You can run all tests via the UI by going to Setup > Develop > Apex Classes

Run All Tests

You can also execute a single test or all tests by going to Setup > Develop > Apex Test Execution and selecting the tests you want

Select one or more tests to run


Force.com IDE

On the Force.com IDE, navigate to the test class you wish to run and right click. Choose
Force.com > Run Tests

Run Tests on Force.com IDE


When the test run finishes, you can view the results on the Apex Test Runner tab. On the left side you can clear or re-run tests and see the code coverage for all the classes. On the right side, you can take a look at the debug log.

Test Run results


Developer Console

On the Developer Console, go to Test > New Run, select the tests you wish to execute and click Run.

Choose the tests to run


When the test run finishes, you can see its status in the Tests tab, along with the test coverage data for all classes. Clicking on any class opens it in the main section, where you can see which lines are covered and which aren't; the covered lines are marked in blue and the uncovered lines in red.
You can drill down to any test method that was run and click on it to open the run details in the main section of the window.
View test run results and code coverage on the Developer Console


Deploying Apex Code


Production environments are the runtime environment for Apex code. Once code is compiled and persisted to an org, it is live and can immediately be used by users. This is why release management is such an important process: it allows the transition into production to happen with some level of change control.

An ideal Apex software project should be set up in such a way that all three types of testing are possible;

  • unit testing
  • integration testing
  • functional testing and UAT


After the developers complete their implementation and unit test their code, they deploy it, preferably to another sandbox, where code from all developers is integrated for integration testing. After that, functional testing, load testing, regression testing, UAT, etc. are performed, preferably in a full sandbox. After this stage, the code can be deployed to a production environment.

Note: Some components such as queues, time triggers, time-based workflow rules etc. are not 100% covered by the Metadata API. Such components cannot be automatically deployed and need to be manually created in the target production org.

To perform Apex code deployment, any one of the following tools or methods can be used;

  • Force.com IDE
  • Force.com Migration Tool
  • Change Sets (related orgs)


Force.com IDE


Deploying Apex Code on the Force.com IDE takes the following steps

Step 1: Right click on the src folder and select Force.com > Deploy to Server
Step 2: Enter your username and password and, if applicable, the security token (needed when logging in from outside the trusted IP range). Finally, select the target org
Step 3: Select the "Project archive" checkbox to create a zip file of the local metadata. Select the "Destination archive" checkbox to create a backup of the current state of the target org. The destination archive is very important since it allows us to roll back if the need arises.
Step 4: A deployment plan is displayed which shows any changes between the source and the target orgs. New components are marked in green, existing but modified components in yellow, and deleted components in red (i.e. components existing in the target org but no longer in the source org). Components which are identical in both orgs are displayed in gray. Select the components that should be deployed by ticking the checkbox next to each one.
Step 5: Click on Validate Deployment to catch any testing issues or errors
Step 6: If the validation fails, a pop-up displays the failure message. You can view the logs for details and then fix such issues. If validation succeeds, a pop-up displays a success message. After a successful validation, deploy the code

Force.com Migration Tool


The Force.com Migration Tool is an ANT-based, extensible command line tool which allows developers to use scripts to deploy Apex code. It is more suitable for technical people who prefer to do deployments using scripts. The different components which make up the Force.com Migration Tool are discussed below;

build.properties: this file contains the login details such as the username, password and URLs of the target org. If you are outside a trusted IP range, you will need a security token, which you append to the end of the password.

build.xml: references the build.properties file, which allows developers to override default values for the local build environment without having to modify build.xml itself. The build.xml file can contain one or more targets. These targets contain tasks which will be performed as part of the automation script, such as deploying metadata and running all tests. Whenever we run the Force.com Migration Tool, we specify which target we want to build, and that target runs its various tasks as part of the build procedure.

Deploy and Retrieve: Salesforce has customised ANT to add the deploy and retrieve tasks, which can be used to create, update and retrieve metadata. Deploy gathers a set of metadata components and updates or creates the corresponding components in the target org, while retrieve downloads a set of metadata components into a set of local files.

Execute: execute the deployment script from the command shell by running ant followed by the name of the target, from the directory where build.xml exists. This first executes all targets that the specified target depends on and then runs the target itself. The target automatically runs its related tasks.

For more information on deployment using the Force.com Migration Tool check the pdf or go here.

Change Sets


Apex code can also be deployed via the UI using change sets. Change sets are a means by which a source org can send customizations like apps, objects, code, reports or email templates to another (target) org.

Here are the things you need to know concerning change sets


  • Change sets can only be used to deploy metadata between related orgs such as a sandbox and a production org or two sandboxes created from the same organization.
  • Apex code must meet unit tests requirements
  • To send a change set from one org to another you must set up a deployment connection between the two orgs
  • On the source org you create an outbound change set and upload it to the target org
  • Once a change set has been uploaded it cannot be modified
  • On the target org you go to inbound change sets and validate and deploy the change set
  • A change set is deployed in its entirety or not at all


Conclusion


We have touched on quite a lot of different aspects of software development on the Force.com platform using Apex Code. We started with a definition of Apex and briefly looked at its characteristics. We then looked at the relationship between Java and Apex, and at the different ways to write Apex code: the UI editor, the Force.com IDE and the Developer Console. What it really means to develop software on the Force.com platform was discussed, followed by a brief discussion of common governor limits. The Apex code framework was examined along with various patterns and design considerations, e.g. asynchronous Apex and bulk patterns. How and when to use Apex was covered. Basic Apex language constructs were discussed, followed by a detailed discussion of dynamic Apex. We then finalized this introduction to Apex by looking at testing and deployment of Apex code.

As you can see, we covered quite a lot of topics in this post. It was a difficult write, and I suppose a difficult read as well. I hope you were able to get something out of it and, more importantly, now know where to go for more information. Just follow the links to the documentation added in the relevant sections of the page.

What we have covered here is just the tip of the iceberg. I encourage you to study all topics relevant to you in more detail.

As usual, I encourage you to challenge my views and start a discussion around anything you find particularly interesting. I salute and thank you for reading this far.
