
FAQ


Q1. Would it be possible to handle a case where uploading takes longer than the time interval specified in the shared access signature on the container?

Ans. Such a case is not supported. The upload control knows only one detail about your storage account: the shared access signature (SAS). This keeps your credentials safe even if someone disassembles the control (which is easy, given that a Silverlight control is downloaded to the client machine). The control should be exposed to the user only while an upload task is pending, and the SAS should be generated with an interval long enough for the slowest expected upload to complete. Another way of removing the time-interval constraint would be to intercept the request through a WCF proxy, but that would make the uploads slower.
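
A minimal sketch of generating such a long-lived SAS on the server, assuming the 2011-era Microsoft.WindowsAzure.StorageClient library; the container name and the four-hour expiry window are illustrative, not part of this solution:

    using System;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    static class SasGenerator
    {
        // Server-side: generate a SAS that outlives the slowest expected upload.
        public static string CreateUploadSas(string connectionString)
        {
            var account = CloudStorageAccount.Parse(connectionString);
            var container = account.CreateCloudBlobClient()
                                   .GetContainerReference("uploads"); // illustrative name

            return container.GetSharedAccessSignature(new SharedAccessPolicy
            {
                Permissions = SharedAccessPermissions.Write,
                SharedAccessExpiryTime = DateTime.UtcNow.AddHours(4) // generous window for slow links
            });
        }
    }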

Q2. What changes would this solution undergo when ported to Silverlight 5, which has the TPL built in?

Ans. SL5 would remove the dependency on the PortableTPL library. The only code change required would be to uninstall the PortableTPL NuGet package and add a reference to System.Threading.Tasks. For complete control I expect a fully functional TaskScheduler to arrive, which would let us limit concurrency to whatever level we desire. However, when I observed the uploads through Fiddler, no more than 6-8 requests ran in parallel, so the thread pool is already managing concurrency efficiently; a custom scheduler would just be an additional feature to integrate once a TaskScheduler with complete functionality becomes available. A Fiddler capture of one such upload is shown below:

[Figure FAQ-1: Fiddler capture showing 6-8 parallel block-upload requests]

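If we did want to cap concurrency explicitly before a full TaskScheduler arrives, a counting semaphore would do; a minimal sketch using the .NET 4 TPL, assuming SemaphoreSlim is available on the target runtime (the limit of 6 mirrors the parallelism observed in Fiddler and is otherwise arbitrary):

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    static class UploadThrottle
    {
        // 6 mirrors the parallelism observed in Fiddler; otherwise arbitrary.
        static readonly SemaphoreSlim slots = new SemaphoreSlim(6);

        public static Task Run(Action uploadBlock)
        {
            slots.Wait();                                // blocks the caller until a slot frees up
            return Task.Factory.StartNew(uploadBlock)
                       .ContinueWith(t => slots.Release()); // free the slot whether or not it succeeded
        }
    }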

Q3. What are the considerations and/or possible changes in this solution for uploading large files? The code appears to keep all the chunks from splitting the original file in memory before uploading them to blob storage.

Ans. I have tested this application with uploads of up to 200 MB. For larger uploads the solution could be modified to read the first 200 MB and upload those blocks in parallel while the FileStream object seeks ahead to the next chunk. This would be a change to the splitting function and is not currently present in the solution. We keep large chunks in memory to speed up the upload, since seeking to a position in the file, reading the bytes, and then uploading a chunk is slower than reading blocks from memory and uploading them.
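
A minimal sketch of such incremental chunking, yielding one block at a time so that only the current chunk is held in memory; the OpenReadChunks name and the 4 MB block size are illustrative, not part of the current solution:

    using System;
    using System.Collections.Generic;
    using System.IO;

    static class ChunkReader
    {
        // Yield the file one block at a time instead of materialising every chunk up front.
        public static IEnumerable<byte[]> OpenReadChunks(Stream file, int blockSize)
        {
            var buffer = new byte[blockSize];           // e.g. 4 * 1024 * 1024
            int read;
            while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
            {
                var chunk = new byte[read];             // copy, since the buffer is reused
                Array.Copy(buffer, chunk, read);
                yield return chunk;
            }
        }
    }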

Q4. What would prevent a race condition in the StartUpload method, where each loop iteration selects the first data packet with (uploadPacket.IsTransported == false)? The relevant piece of code:

    if (concurrencyLevel < this.packets.Count)
    {
        var uploadBlock = (from uploadPacket in this.packets
                           where uploadPacket.IsTransported == false
                           select uploadPacket).FirstOrDefault();
    }

Ans. There would be no race condition, because this code block runs on a single thread, and that thread assigns the upload block to a new Task. The IsTransported flag identifies which file chunks are available for upload.

The process is:

[Figure FAQ-2: diagram of the upload process]

The state works as:

[Figure: IsTransported state transitions (false → null → true)]

StartUpload() picks only an element whose IsTransported value is false. It assigns that block to a task after setting the value to null. ReadHttpResponseCallback() then holds a block whose IsTransported is null, and sets it to true only on a successful upload. When the loop inside StartUpload() runs again, it cannot pick the same block, since its IsTransported is either null (already assigned to a thread) or true (already uploaded).
This is why the IsTransported flag is tri-state. Each thread's ReadHttpResponseCallback has exactly one upload block assigned to it, so it cannot change the value of any other element.
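
A minimal sketch of the tri-state flag and the selection loop described above; apart from IsTransported, StartUpload, and ReadHttpResponseCallback, the type and member names are illustrative:

    using System.Collections.Generic;
    using System.Linq;
    using System.Threading.Tasks;

    class UploadPacket
    {
        // false = waiting for upload, null = assigned to a thread, true = uploaded
        public bool? IsTransported { get; set; }
        public byte[] Data { get; set; }
    }

    class Uploader
    {
        readonly List<UploadPacket> packets;

        public Uploader(List<UploadPacket> packets) { this.packets = packets; }

        // Runs on a single thread, so no two tasks can ever claim the same packet.
        public void StartUpload()
        {
            var uploadBlock = packets.FirstOrDefault(p => p.IsTransported == false);
            if (uploadBlock != null)
            {
                uploadBlock.IsTransported = null;                  // mark as in flight
                Task.Factory.StartNew(() => Upload(uploadBlock));
            }
        }

        void Upload(UploadPacket packet)
        {
            // ... PUT the block; the real code does this in ReadHttpResponseCallback ...
            packet.IsTransported = true;                           // set only on success
        }
    }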

