IsolationLevel.RepeatableRead to prevent duplicates

By : lorddev

I'm working on an application that is supposed to create products (such as shipping insurance policies) when PayPal Instant Payment Notifications are received. Unfortunately, PayPal sometimes sends duplicate notifications. Furthermore, another third party performs web-service updates at the same time, whenever it receives updates from PayPal.

Here is a basic diagram of the database tables involved.

// table "package"
// columns packageID, policyID, other data...
// table "insurancepolicy"
// columns policyID, coverageAmount, other data...

Here is a basic diagram of what I want to do:

using (SqlConnection conn = new SqlConnection(...))
{
  conn.Open();
  SqlTransaction sqlTrans = conn.BeginTransaction(IsolationLevel.RepeatableRead);

  // Calls a stored procedure that checks whether the package row
  // already has a policyID foreign key value.
  if (PackageDB.HasInsurancePolicy(packageID, conn))
  {
    sqlTrans.Rollback();
    return false;
  }

  // Insert row in foreign table.
  int policyID = InsurancePolicyDB.Insert(coverageAmount, conn);
  if (policyID <= 0)
  {
    sqlTrans.Rollback();
    return false;
  }

  // Assign foreign key to parent table.  If this fails, roll back everything.
  bool assigned = PackageDB.AssignPolicyID(packageID, policyID, conn);
  if (!assigned)
  {
    sqlTrans.Rollback();
    return false;
  }

  sqlTrans.Commit();
  return true;
}

If two (or more) threads (or processes, or applications) run this code at the same time, I want the first thread to lock the "package" row while it has no policyID, and hold that lock until the policy is created and the policyID is assigned to the package table. The lock would then be released. My hope is that the other thread running this same code will pause when it reads the package row to check whether it already has a policyID. When the first transaction's lock is released, the second transaction should see that the policyID is there and return without inserting any rows into the policy table.

Note: Because of the CRUD database design, each of the stored procedures involved performs a single Read (select), Create (insert), or Update operation.

Is this the right use of RepeatableRead transaction isolation?


By : lorddev


It would be safer and cleaner if the insert into Policy simply hit a unique constraint on the attempt to insert a duplicate. Raising the isolation level can lower concurrency and lead to other nasty issues, such as deadlocks.
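For example, a minimal sketch of such a constraint, assuming insurancepolicy gains a package_id column (the column and constraint names here are illustrative, not from your schema):

```sql
-- Assumption: insurancepolicy records which package it covers, so a
-- duplicate insert for the same package violates the constraint.
ALTER TABLE insurancepolicy ADD package_id INT NOT NULL;

ALTER TABLE insurancepolicy
    ADD CONSTRAINT UQ_insurancepolicy_package UNIQUE (package_id);

-- A second concurrent insert for the same package now fails with
-- SQL Server error 2627 (unique constraint violation), which the
-- caller can catch and treat as "policy already exists".
```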

Another way is to always insert the Policy row, then roll it back if the Package has already been attached to a Policy:

begin tran  -- read committed (the default) is enough

/* tentatively insert new Policy and capture its id */
insert Policy (...)
set @policy_id = scope_identity()

/* attach Package to Policy if it's still free */
update Package
  set Package.policy_id = @policy_id
  where Package.package_id = @package_id and Package.policy_id is null

if @@rowcount > 0
  commit    -- Package was free; keep the new Policy
else
  rollback  -- Package already had a Policy; undo the tentative insert

This works best when conflicts are rare, which seems to be your case.

I believe you actually want the Serializable isolation level. The problem is that under RepeatableRead two threads can both get past the HasInsurancePolicy check: each takes only a shared lock on the package row, both see no policyID, and then each blocks on the other's shared lock when it tries to update the row, which typically ends in a deadlock. (Though I have no idea what InsurancePolicyDB.Insert would do or why it would return 0.)
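If you keep the read-then-insert flow, one common fix is to take an update lock during the check so a second reader blocks instead of both deadlocking later. A sketch, assuming the check is a simple select (the hints are SQL Server's; the variable names are illustrative):

```sql
-- Inside the transaction: UPDLOCK makes the first reader hold an
-- update lock on the package row, and HOLDLOCK keeps it until commit;
-- a second reader waits here rather than both taking shared locks
-- and deadlocking on the later update.
select @existing_policy_id = policyID
from package with (updlock, holdlock)
where packageID = @package_id;
```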

You have many other options as well. One is to use a message queue and process these requests serially yourself. Another is to use sp_getapplock and lock on a key unique to that package; that way you don't lock any more rows or tables than you must.
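A sketch of the sp_getapplock approach; the resource-name format and timeout are illustrative choices, not required values:

```sql
-- Build a lock resource name unique to this package.
declare @resource nvarchar(255) =
    N'package_policy_' + cast(@package_id as nvarchar(20));

begin tran;

declare @result int;
exec @result = sp_getapplock
    @Resource    = @resource,
    @LockMode    = 'Exclusive',
    @LockOwner   = 'Transaction',
    @LockTimeout = 5000;  -- milliseconds; -1 waits indefinitely

if @result < 0
begin
    rollback;  -- could not get the lock in time; give up or retry
    return;
end

-- ...check for an existing policy, insert, assign the foreign key...

commit;  -- a Transaction-owned applock is released automatically
```

Only one caller at a time can hold the lock for a given package, so the duplicate check and the insert run as an atomic unit without escalating the isolation level for every table involved.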
