

Oracle PLSQL Template for RESTful API

21 May 2021 by Christina Moore

Abstract

A key purpose of this blog entry is to announce that we are publishing a template for PL/SQL-based APIs. It is located on my GitHub page (https://github.com/cmoore-sp/plsql-api.git) and communicates with a public website hosting public APIs.

Introduction

My first interesting exploration with writing APIs in Oracle APEX and PL/SQL involved exchanging data and documents between our database and AWS S3. It has been four years since I published that effort. In the ensuing years, our team continued to write APIs. We have benefited from and appreciated the progressive improvements Oracle made to writing these digital connectors. Release by release, year by year, Oracle smooths the process of connecting via HTTP and managing JSON data. In APEX 20.1, the team introduced an APEX-based wizard to exchange data via HTTP links.

The template lives on my GitHub page (https://github.com/cmoore-sp/plsql-api.git) and communicates with a public website hosting public APIs. I initially worked through the US Weather Service's site, but when I hit it from Oracle, it denied me for lack of authentication, whereas I easily got data from Postman. C'est la vie. We've been using and refining this template for about 18 months, maybe longer.

Rarely is our goal to write code line by line. I reviewed our own work on a data exchange link to the U.S. government's FEMA data site. The version I wrote with the best tools then at hand required 545 lines to parse JSON data. In 2021, that same effort took 124 lines – a quarter of the code to achieve the same results. The code is simpler, shorter, and easier to read. That boils down to fewer errors and reduced development costs.

This is the first of a series of blog posts on writing application programming interfaces with Oracle PL/SQL. As I write this, I celebrate the APEX-based efforts, which reduce code even further.

An earlier, related post dealt with error trapping and debug logging. That can be found here: Oracle APEX Standardize Error & Debugging

Design and Build an API with Oracle

Early Days

Step 1: research and explore. Do your early explorations with tools such as cURL or Postman, or some tool other than Oracle. Sure, Oracle is the final destination so the data can mix happily with your tables, but tools like Postman provide better debugging and visibility into the feedback from the API host.

Frankly, the error messaging from Oracle is a bit too blunt – non-specific and incomplete. Postman-like tools provide granular assessments including status (200 OK), time, and size. Postman provides a cURL version of the request, and the full suite of headers is visible. It is the Leatherman or Swiss Army knife of utilities.

Furthermore, given that many of us run Oracle from a cloud-based site, launching API tests from there (AWS, Oracle Cloud, or wherever) introduces several other layers of variables and potential issues such as:

  • Oracle-based Access Control Lists (ACL)
  • Oracle’s management of HTTPS / SSL certificates
  • Routing issues within the cloud environment
  • Cloud-based Access Control Lists
  • Cloud-based Firewalls and Web Application Firewalls

For example, our WAF denies traffic that looks like SQL, suspecting an SQL injection attack. Setting aside Oracle's 1980s biases and the cloud edge during early work saves those problems for later. Maybe it is all perfect, but in the early hours limit your variables and sources of trouble.

While on Step 1 “Research and Explore”, read those API instructions carefully.

At Storm Petrel, we look for the ‘Long Pole in the Tent’ or that deep dark hole that will trap the effort. Somewhere in the API documentation and in your effort there is easy work and tough work. Something will hold the entire effort up – that dreaded long pole.

I tend to be suspicious of the authentication process, so we tackle that first. We tried for months to nail down an API with PayPal and got so utterly frustrated that we shopped for another vendor. When we did, we researched the API documentation and access as an integral part of the purchase decision.

The cost of building and supporting an API ought to be part of the long-term analysis of selecting a remote partner. Many vendors provide a sandbox or developer's access to the API. We pursue that before plonking down a credit card on recurring costs – well, we did after the PayPal mess.

After years of writing and supporting APIs with Oracle PL/SQL and APEX, we tell ourselves: if it can be done with Postman, it can be done with Oracle.

API Workflow and Table Architecture

With few exceptions, like my partner's application to check the weather outside the window and the pool temperature, data from the outside ought to be buffered and isolated as it transitions into your curated databases. It is foreign data; why trust it implicitly?

Our workflow and architecture go as follows (when getting data from a remote site):

  1. Execute the HTTP GET
    1. Store the entire GET syntax in a table called API_STAGING
      1. API Name,
      2. Module name,
      3. Base URL
      4. Appended bit of the URL
      5. HTTP status code
      6. JSON response
      7. HTTP response
      8. Ok to Delete (a date field)
    2. An HTTP failure also stores the API staging data.
    3. This package manages the API calls, the looping needed to fetch large numbers of data rows, and the error trapping.
  2. Parse the JSON data
    1. Using a loop, spin through the API_Staging data
    2. Query the new JSON data
    3. Merge data to intermediate Oracle table
      1. We create a table based on the JSON data
      2. We name fields as close as possible to the JSON names
      3. We add a few fields to track insert and update dates – basic management
    4. Mark rows in the API_Staging table as deletable once the JSON data has been successfully placed into Oracle tables
  3. Merge with the real data inside the application
    1. We can validate primary keys and foreign keys
    2. Confirm integrity of the data
    3. Isolate and report on any anomalous data

When updating remote data with a POST or PUT, the process runs in reverse. We progressively convert the data and log each action. When ready, we put the outbound data (a BLOB or CLOB) in the API staging table.

Tables

  • API Staging – tracks the API activity, behaves like a log for API calls and responses
  • Intermediate Tables – tables that mimic the JSON data structures but using classic relational database tools. We mimic JSON field names where possible.
  • Production Tables – These tables become the destination for the API data (or the source if sending out with a RESTful POST or PUT)
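The API_STAGING columns described above can be sketched as DDL. This is a sketch under assumptions: the column names and types below are illustrative, not the template's exact definitions (see the GitHub repository for the published version).

```sql
-- Sketch of the API_STAGING log table; names and types are illustrative
create table api_staging (
  id                number generated by default on null as identity primary key,
  api_name          varchar2(100),
  module_name       varchar2(100),
  base_url          varchar2(500),
  appended_url      varchar2(500),  -- the bit appended to the base URL
  http_status_code  number,         -- e.g. 200, 404
  json_response     clob,           -- raw JSON payload as returned
  http_response     clob,           -- full HTTP response text
  ok_to_delete      date,           -- set once data is safely merged
  created_on        date default sysdate
);
```

Because the staging table behaves like a log, a scheduled job can safely purge rows whose ok_to_delete date has passed.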

Data Merge

Historically, I compared a few key elements of data to test whether a change exists. Some APIs, such as FEMA's, include a last update date and a hash value; I can easily compare those. After examining work done by the Oracle APEX development team, I noticed that they compute an MD5 hash of an entire row of data. I lifted their procedure into some tests recently.

They converted dates/timestamps to varchar2; they also converted numbers to varchar2. The first time I used their process, I got snagged by null values in fields, so I updated the procedure to include an NVL. Now, new data gets inserted; data that appears the same, I run through the hash, then compare old and new.

function api_publicAPI_MD5 (
  P_ROW  in api_publicAPI%rowtype
) return varchar2
as
  l_procedure	  varchar2(100) := 'api_publicapi_staging.api_publicAPI_MD5';
begin  
  -- please convert all values to VARCHAR with consistent formats and manage nulls
  -- such as to_char("P_ACTION_DATE",'yyyymmddhh24:mi:ss'), etc
  return apex_util.get_hash(
    apex_t_varchar2(
     nvl(P_ROW.api_name,'^')
     ,nvl(P_ROW.description,'^')
     ,nvl(P_ROW.auth,'^')
     ,nvl(P_ROW.https,'^')
     ,nvl(P_ROW.cors,'^')
     ,nvl(P_ROW.link,'^')
     ,nvl(P_ROW.category,'^')
     ));		
end api_publicAPI_MD5;
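To show where such a hash fits, here is a hedged sketch of the compare-then-merge decision. The row_hash column, the key column, and the column list are illustrative, not part of the published template.

```sql
-- Sketch (illustrative names): insert new rows, update changed rows, skip the rest
l_new_hash := api_publicAPI_MD5(p_row => l_row);

select max(row_hash)
  into l_old_hash
  from api_publicapi
 where api_name = l_row.api_name;

if l_old_hash is null then
  -- brand-new row
  insert into api_publicapi (api_name, description, row_hash)
  values (l_row.api_name, l_row.description, l_new_hash);
elsif l_old_hash != l_new_hash then
  -- changed row
  update api_publicapi
     set description = l_row.description,  -- and the remaining columns
         row_hash    = l_new_hash
   where api_name = l_row.api_name;
end if;  -- equal hashes: nothing to do
```

Storing the hash alongside the row means the next refresh only computes one new hash per incoming row rather than comparing every column.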

Template for API

As mentioned above, I am publishing an API template written in PL/SQL on my GitHub site. I endeavored to find an API site that did not require registration or authentication so that this package could run easily. API sites that are fully open run the risk of denial-of-service attacks, so I respect the need for authentication. I did not include much guidance on authentication; we've rarely encountered the same rules twice. We plod through that effort after registering an account. Sorry.

The files include:

1. API_SETUP (procedure)

This procedure establishes a couple of tables:

  1. API_STAGING for the logging of API data in a native format
  2. API_PUBLICAPI mimics the Public API data structure. Sorry for the awkward name. Our standard involves starting with “API_” then the name of the destination. In this case the destination is “PublicAPI”

2. API_PUBLICAPI_PKG

Again, sorry about the stupid name. Our standard is API_Destination_PKG, so when the damn thing is called PublicAPI, it looks odd. This package does the HTTP GET and stores the resultant data in the API_STAGING table. The package includes standard code for all normal aspects of a RESTful API. It also references a proxy; in some cases, we use an Apache server to proxy our calls. This permits us to bypass the misery of Oracle certificate management.

3. API_PUBLICAPI_STAGING

This package loops through the API_STAGING table to parse the JSON data into an Oracle table. It has a few tricks for querying JSON data.

Close

I will cover detailed elements in upcoming blogs. We have learned from and struggled with JSON queries; some of the features are not well documented.

Have fun, be safe.

Christina Moore (@cmoore_sp)

Filed Under: Oracle APEX Tagged With: apex 20.2, json, oracle, oracle apex, restful, restful api

APEX Debug Messages – Standardize error collection, debugging and trace

19 May 2021 by Christina Moore

Abstract

In the releases of Oracle since 12.1, the Oracle APEX team has enhanced the features and benefits of APEX_DEBUG_MESSAGES. For a couple of years, we have been able to use Oracle Application Express (APEX) debug tools to replace logger and other ad-hoc error trapping tools. The benefits include:

  1. Improved portability of code;
  2. Streamlining the workflow involved with debugging production errors;
  3. Consistency between debug logging and error traps.

Teams often engage multiple tools, including Logger, created by Tyler Muth. Following the mantra "Lean Into APEX", incorporating native Oracle and APEX features can make our efforts more efficient and reduce the costs of application development. APEX Debug Messages include an API that permits developers to employ the tool as a logging and debugging framework. Unlike Logger and other related tools, there is nothing to install, provided you have Oracle APEX 18.1 or a more recent version.

The process involves:

  1. Creating an APEX session
  2. Enabling APEX Debug at the desired level (error, warning, trace, etc)
  3. Include code to log values and context
  4. Trap PL/SQL errors
  5. Detach from APEX session
  6. Query trace and debug data from APEX_DEBUG_MESSAGES

References, Links, Acknowledgements

Thanks to Dimitri Gielis for his blog post and his work with United Codes and APEX R&D, and thanks to the great team at Oracle APEX.

Link for Oracle APEX_SESSION

https://docs.oracle.com/database/apex-18.1/AEAPI/APEX_SESSION.htm

Link for APEX_DEBUG API

https://docs.oracle.com/en/database/oracle/application-express/20.1/aeapi/APEX_DEBUG.html

Introduction

With the snowfalls of a heavy winter and the prolonged isolation of the pandemic, I decided to look at error handling at Storm Petrel. You have those tasks too; you say: "oh, on a stormy winter's day I'll fix that." That day came, so I looked at how we handled error trapping. For our flagship application, we have 70 Oracle PL/SQL packages with approximately 55,000 lines of code.

My goals involved consistency, ease of support, and even ease of transportation – moving code from one environment to another. Over the years, developers left their marks on the code. While I know that I (the boss) published standards, discussed coding standards, and tried to enforce them, my protestations failed. When I looked, I found three debug tools and a few ways of trapping errors. My least favorite involved the suppression of all errors – EVIL.

Debug logging remains a fundamental and important tool. When the code gets tough, on go the debugging tools. You trap data to examine as you spin through loops and shovel data about. You know a section was tough when debug or logging-type phrases heavily salt the code. I have even been known to remove some debugging lines to obscure the fact that I got so impossibly stuck… especially when the error was entirely obvious and utterly stupid.

Then comes the day when code runs in production and errors appear. A rate calculates wrong, or that dreaded null data set fouls the results. You rush in, flip on the debugging tools, trap data, and stride through the code looking for the source of the error. It is here that speed and consistency matter. You may just scream aloud: why didn't someone leave me debugging tools in here? And then you find some debugging goes to one tool whilst other debugging goes to another.

Debugging tools strive to meet similar goals. You dribble lines into packages that trap data for review. The logging process ought to show you context and data. Where am I in the code? What are the values for variables or data columns? It is an X-ray machine or an MRI for code.

My recent criteria involve the following:

  1. Ease of use
  2. Ease of portability
  3. Consistency between debugging and error logging

Ease of Use

Like logger and other tools, trapping errors and tracing errors should involve lightweight code. One line of code is preferred. I also prefer to avoid bracketing code with IF statements, if possible. Ease of use also includes a single switch at the top of the code to turn on debugging and set the debugging level.

apex_debug.enable(p_level => g_debug_level);  -- (1 critical, 2 warn, 3 info, 6 trace) 

We can share code and exchange code more freely.

Ease of Portability

Internal tools such as we've used for years, and external public-domain tools, all require some sort of installation – typically tables and a package. While logger is widely adopted, shared code may not compile until debugging statements are commented out or the debugging package is installed. By employing a native APEX feature set, code becomes more portable.

Consistency between debugging and error logging

By bringing debugging and error logging together into one repository, chasing errors becomes faster and more streamlined. With one target, teams of developers learn to focus their efforts on querying apex_debug_messages instead of looking for APEX errors here, PL/SQL errors there, and debug tracing data in a third place.

Process for APEX_DEBUG

Because the apex_debug feature expects to operate within an APEX session, we must create a session before using the associated API. If our code is called from within an APEX session then we don’t need this extra step.

The Debug API expects to associate errors, warnings, and tracing information with the context of an APEX application. In summary, it needs the application ID, the page number, the username, and session data. We can shortcut these steps to gain the benefits of a unified target for errors and debugging data.

The process for employing APEX Debug in stand-alone PL/SQL code follows:

  1. Creating an APEX session
  2. Enabling APEX Debug at the desired level (error, warning, trace, etc)
  3. Include code to log values and context
  4. Trap PL/SQL errors
  5. Detach from APEX session
  6. Query trace and debug data from APEX_DEBUG_MESSAGES

Like logger, there are levels of messages that are captured. Since at least version 5.1, APEX provides a series of constants associated with the levels.

Level Name                           Level   Constant
Error/Critical Error                 1       apex_debug.c_log_level_error
Error/Warning                        2       apex_debug.c_log_level_warn
Informational (Debugging)            4       apex_debug.c_log_level_info
Application – procedure/functions    5       apex_debug.c_log_level_app_enter
Application – other messages         6       apex_debug.c_log_level_app_trace
APEX Engine – procedures/functions   8       apex_debug.c_log_level_engine_enter
APEX Engine – tracing                9       apex_debug.c_log_level_engine_trace

APEX Debug Levels

You may query these data through APEX_DEBUG_MESSAGES. Here is a sample query for looking at errors, warnings, and trace data:

select 
	message_level,
	application_id,
	page_id, 
	message_timestamp,
	message,
	apex_user,
	call_stack
from APEX_DEBUG_MESSAGES
where application_id = '197'
and message_level < 5
order by message_timestamp desc
;

Creating an APEX session

When you create an APEX session, either within Application Express or by hand within a package, you will have a "session" – sorry for the rather obvious-sounding statement. It matters for a couple of reasons. As mentioned, several API features do not work without the context of a session. Second, you can rapidly identify whether you are within a session: test whether a session exists by querying for the APP_ID. Frankly, you can also query for the session ID, page ID, and even username.

If APP_ID is null, then no session exists.

Because I am using the APEX Debug API from outside of APEX, I need to decide where to put my errors and debug messages. I added a page to the most frequently used application in the schema/workspace. In our applications, Page 2 is an empty page called ERRORS. Others may opt to create a separate application to toss errors at. That's you making your decisions.

Of the three elements of data used to create an APEX session, two must exist. You MUST have:

  1. A valid Application ID
  2. A valid Page ID within the above application

The username can be something clever and easily recognized, such as the package name. In the original post, an image showed the declarations at the top of an API package. A developer can change the debug level and recompile easily at the package level; there are two definitions of g_debug_level.
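The screenshot of those declarations did not survive extraction. A minimal sketch of what such package-spec declarations might look like, assuming the g_app_id, g_page_id, g_session_user, and g_debug_level names used in the snippets below; the specific values here are illustrative:

```sql
-- Package-spec declarations (values are illustrative, not the original screenshot)
g_app_id        number        := 197;   -- a valid application ID
g_page_id       number        := 2;     -- our empty ERRORS page in that application
g_session_user  varchar2(100) := 'API_PUBLICAPI_PKG';         -- package name as username
g_debug_level   number        := apex_debug.c_log_level_warn; -- 2: errors and warnings only
```

With these at the top of the package, flipping the whole package to trace logging is a one-line change followed by a recompile.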

For a procedure within a package, the process of establishing a session looks like this:

l_procedure		varchar2(200):= 'package_pkg.procedure';
begin
-- prevent recursive/duplicative sessions
if v('APP_ID') is null then
  apex_session.create_session(p_app_id => g_app_id, p_page_id => g_page_id, p_username=>g_session_user);
end if;

Enabling APEX Debug at the desired level (error, warning, trace, etc)

We set the debug level normally at 2 (“Warning”) from the package constants. This means that errors (critical or warning) are both logged as expected. Any code tracing is not captured with level 2 (“warning”).

The syntax of the APEX_DEBUG messages involves %s placeholders. The parameters are numbered p0 (zero) through p9. For each %s in the message, a parameter is substituted in order; "%s %s %s" pulls in p0, p1, and p2. For example, if the message were "%s %s", p0 were "Hello", and p1 were "World", the resultant message would be "Hello World". Seems simple enough.

apex_debug.enable(p_level => g_debug_level);  -- (1 critical, 2 warn, 3 info, 6 trace)       
apex_debug.message(p_message => 'Debug enabled on %s', p0 => l_procedure); -- runs only if info or more

There are several ways of sending messages via the API to APEX_DEBUG_MESSAGES. I’ll list them here in rank order. Note that the parameters and calls are nearly identical except for the name.

  1. apex_debug.error
  2. apex_debug.warn
  3. apex_debug.info
  4. apex_debug.enter
  5. apex_debug.trace

The wildcard is message. The message procedure can be set to any level with the parameter p_level; the default level for message is the same as info (level 4), but it can be changed.
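For example, the same message can be promoted to a different level via p_level. A sketch, assuming the p_level parameter as documented; l_rate and l_name are illustrative variables:

```sql
-- Logged at the default info level (4); invisible at our normal setting of 2
apex_debug.message(p_message => 'rate %s computed for %s', p0 => l_rate, p1 => l_name);

-- Same message, promoted to warning level (2) so it survives the normal setting
apex_debug.message(
  p_message => 'rate %s computed for %s',
  p0        => l_rate,
  p1        => l_name,
  p_level   => apex_debug.c_log_level_warn);
```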

As an operational note, we endeavor to pass the procedure name in early, and we use the package name as the username. This helps provide context to the debug messages. When sorting through thousands of messages in an active system, giving yourself tools to improve querying later will be a benefit.

Trap PL/SQL errors

One of the benefits of engaging APEX_DEBUG for both tracing and error trapping is that the code for error trapping is just so easy. When using exception handling, the code can look like this:

-- blah, blah, blah
commit;
apex_session.detach; -- forces any debug/error messages to be written to APEX_DEBUG_MESSAGES
exception when others then
  apex_debug.error('Critical error %s', sqlerrm);		
  apex_session.detach;

In some of our API code or code that runs from scheduler, we also send an email to team members.

Detach from APEX session

Dimitri Gielis (@dgielis) wrote a blog on 11 June 2019 related to APEX Debug. His link is here:

https://dgielis.blogspot.com/2019/06/see-apex-debug-info-in-plsql-and-sql.html

In turn he acknowledges support from Christian Neumueller (@chrisneumueller). Dimitri states the following in his blog:

“APEX debug is buffering it’s output, to reduce the I/O overhead. Buffering is disabled for LEVEL9, but the other levels only write:
 – after 1000 records
 – at the end of request processing
 – when you detach the session”

So if you want or need immediate visibility, detach from your session and let the messages pop through. Without Dimitri's (and Christian's) guidance, I found APEX Debug a bit annoying. What's wrong with cleaning up when done? It is what our mothers taught us to do, right? Clean up after yourself.

Query trace and debug data from APEX_DEBUG_MESSAGES

We have put the debug query in our customer service application as an interactive report. I also keep my basic query in Notepad++ as part of my toolkit. I tend to organize the columns my way. I find the select * version oddly organized. But of course, you can disagree with me too!

select 
	message_level,
	apex_user,	
	application_id,
	page_id, 
	message_timestamp,
	message,
	call_stack,
	elapsed_time,
	execution_time
from APEX_DEBUG_MESSAGES
where application_id = '106'
and message_level < 6
order by message_timestamp desc
;

Other values, including elapsed_time and execution_time, help when evaluating performance issues. I normally ignore them when just plodding through a trace or debug process.

By using the package name as the username ("APEX_USER"), you can modify your WHERE clause to rapidly find your debug data.
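For example, filtering on the package name (API_PUBLICAPI_PKG here is illustrative, standing in for whatever username your package passed to create_session):

```sql
select message_timestamp, message_level, message
from APEX_DEBUG_MESSAGES
where apex_user = 'API_PUBLICAPI_PKG'   -- package name used as the session username
and message_level < 6
order by message_timestamp desc;
```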

We find the debug reporting inside of APEX less useful than running the queries by hand. Maybe in the future, more debug levels will be visible there. The APEX report is found on the Utility page and is called "Debug Messages".

Then Cleanup

There are a number of API procedures for cleanup. My favorite has been to use the Utility / Debug Messages page. The button there seems to remove the junk and keep the true errors and warnings.

I do not like "Remove Debug by App"; it removes all messages, including errors and warnings that might deserve to be retained. Please look at the APEX documentation:

https://docs.oracle.com/en/database/oracle/application-express/20.1/aeapi/REMOVE_DEBUG_BY_VIEW-Procedure.html#GUID-4E27243D-92A2-44E7-8446-3ADADF185A65

The 2021 options are:

  • Remove Debug by Age
  • Remove Debug by App
  • Remove Debug by View
  • Remove Session Messages

Remove Session Messages requires that your effort exists within the same session that generated the messages; the API procedure does not allow passing a session ID through.

Christina Moore (@cmoore_sp)

Filed Under: Oracle APEX

