Oracle PL/SQL AWS S3

Amazon Web Services (AWS) Simple Storage Service (S3) has upgraded its application programming interface (API) to support HTTPS. This upgrade involved a redesign of the authentication process at AWS. Authentication now involves both authenticating the user and verifying the canonical request made to AWS S3.
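The heart of the new scheme, AWS Signature Version 4 (AWS4), is a chain of HMAC-SHA256 operations that derives a signing key from your secret key plus the request's date, region, and service. Here is a minimal sketch of that derivation, assuming Oracle 12c or later (where dbms_crypto.hmac_sh256 is available, and with execute granted on DBMS_CRYPTO); the function name and parameters are mine for illustration, not the package's actual API:

 -- Sketch only: derive the AWS4 signing key per Amazon's documented chain
 create or replace function aws4_signing_key (
   p_secret_key in varchar2,
   p_date_stamp in varchar2,              -- 'YYYYMMDD', e.g. '20170215'
   p_region     in varchar2,              -- e.g. 'us-east-1'
   p_service    in varchar2 default 's3'
 ) return raw
 is
   l_key raw(64);
   function hmac (p_data in varchar2, p_key in raw) return raw is
   begin
     return dbms_crypto.mac(utl_i18n.string_to_raw(p_data, 'AL32UTF8'),
                            dbms_crypto.hmac_sh256,
                            p_key);
   end;
 begin
   -- Each step feeds the previous result back in as the HMAC key
   l_key := hmac(p_date_stamp, utl_i18n.string_to_raw('AWS4' || p_secret_key, 'AL32UTF8'));
   l_key := hmac(p_region, l_key);
   l_key := hmac(p_service, l_key);
   l_key := hmac('aws4_request', l_key);
   return l_key;  -- this key then signs the "string to sign" built from the canonical request
 end;
 /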

Morten Braten and Jason Straub have published and supported tools related to AWS S3 and Oracle PL/SQL. These tools, located within the Alexandria Library, do not support the new AWS4 signature and do not support HTTPS.

In recent months, I have researched the AWS interface and written an updated package. I have hosted these tools on my GitHub. I think that with solid collaboration and broader testing from our Oracle and APEX community, we can get this package solid enough to be included in the library.

Why?

Answer #1

At Storm Petrel, we use AWS S3 to augment the database for the storage of documents in our various document management tools. We keep an original in a BLOB and a modified copy at S3. We have wanted to make sure that ALL communication between the users and our infrastructure is 100% encrypted. With the link to AWS S3 travelling via HTTPS, we can stand behind this statement.

Answer #2

Initially, AWS S3 was hosted entirely from the Northern Virginia region. S3 is now hosted in 15 regions. Some of these regions require HTTPS and some require the AWS4 signature. It seems that AWS will want to deprecate the HTTP interface in time. Best to stay ahead of the demand!

Answer #3

This was well out of my comfort zone. In 2015, we wrote an interface to MailChimp. In 2016, we wrote interfaces to FEMA and to Nexmo for SMS and multifactor authentication. The AWS S3 work was tough!

What is next?

I focused on getting the features we use in place. AWS S3 has a huge number of features. This package does not even have all of the features that Morten put into the original effort. So, testing and continued development are what come next.

I can’t do it alone. The foundation is here. The authentication and HTTPS pieces work. Now to add the other bits that S3 has to offer, or that we need.

GitHub!

Please direct yourself to the GitHub site for more information about the package, and to get the packages. At that site you will find:

  • Documentation
  • The package (specification and body)

-/qed

Oracle APEX AWS S3

I must tip my hat to the wicked smart people on this road ahead of me. Thanks to all of you! We are moving our commercial APEX applications from shared hosting to Amazon Web Services. Due to the constraints of APEX, we set up our own Oracle installation, forgoing RDS. We ran into two problems. One problem also (magically) appeared at our hosting service. The other one has been doggin’ my heels for months and months.

Request Time vs Current Time

ORA-20000: The difference between the request time and the current time is too large.

Researching this one was annoying. I ran queries to prove to myself that the times were aligned.

 select 
	to_char(sysdate,'MM/DD/YYYY HH24:MI:SS'),  -- HH24 cannot be combined with AM (ORA-01818)
	sessiontimezone, 
	current_timestamp,
	amazon_aws_auth_pkg.get_date_string  
 from dual;

The GMT (UTC) offset in AMAZON_AWS_AUTH_PKG is written as an integer, and apparently a positive one. Eastern Time is typically shown as -5; for this setting, it is entered as 5. Where are the daylight saving adjustments? If the hosting service pops back and forth for daylight saving time, this value remains static. The solution that seems most stable is to set your APEX host to use Zulu (UTC) as the timezone. Set it, forget it.

When we set g_gmt_offset to 0 and the server clock to UTC, the problems disappeared.
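For reference, here is a minimal sketch of that setup, assuming the Alexandria-style amazon_aws_auth_pkg.init call (the offset parameter feeds the package's g_gmt_offset; your version's parameter names may differ):

 begin
   amazon_aws_auth_pkg.init(
     p_aws_id     => 'YOUR_ACCESS_KEY_ID',      -- placeholder credentials
     p_aws_key    => 'YOUR_SECRET_ACCESS_KEY',
     p_gmt_offset => 0                          -- server clock runs on UTC, so no offset
   );
 end;
 /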

S3 Bucket Names

ORA-20000: The request signature we calculated does not match the signature you provided. Check your key and signing method.

This error message has followed me for six to seven months. It appeared with some regularity. It takes little effort to save the Oracle schema name into a variable with:

:P7_BUCKET := GET_SCHEMA_NAME;
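GET_SCHEMA_NAME is our own helper; a standard-Oracle equivalent, noted here because it also returns the name in upper case, would be:

 :P7_BUCKET := sys_context('userenv', 'current_schema');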

Now when you throw this variable at the following query, things fail.

 select
   key as "Docs"
 from table(AMAZON_AWS_S3_PKG.GET_OBJECT_TAB(
        p_bucket_name => trim(:P7_BUCKET),
        p_prefix      => NULL,
        p_max_keys    => 1000));

There is no joy in this effort.

Trial and Failure

The resolution appears to be in how the S3 buckets are named. If buckets are named in lower case, they work far better; S3's DNS-compliant bucket naming favors lower case. When I use trim(lower(:P7_BUCKET)) in APEX and have a corresponding S3 bucket in lower case, I avoid problems. I use upper case only to be served the error message shown above, stating that the signatures don't match.
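Put together, the working version of the earlier query is the same call with the bucket name folded to lower case:

 select
   key as "Docs"
 from table(AMAZON_AWS_S3_PKG.GET_OBJECT_TAB(
        p_bucket_name => trim(lower(:P7_BUCKET)),  -- lower case matches the S3 bucket name
        p_prefix      => NULL,
        p_max_keys    => 1000));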

Scott Spendolini wrote the clearest description of the process in his blog.

Jeff Kemp’s entry from March 2012 is here.

Morten Braten has an entry from March 2011.

-qed-