Friday, August 30, 2024

Native PL/SQL Application to Capture Source Code and Configuration Data

I recently completed development and testing on the open source project ODBCapture.

ODBCapture is a native PL/SQL application that can be used to capture self-building scripts (source code and configuration data) for a database.

Existing tools like TOAD, PL/SQL Developer, and SQL Developer can create “source code” scripts from an Oracle database. They can also create data load scripts from an Oracle database. What they cannot do is create a cohesive set of installation scripts that execute from a single “install.sql” script.

Existing database source code is typically handled by Liquibase and Flyway, which are “diff” engines: they simply track changes to a database. Rarely is the source code from these “diff” engines ever used to create a database from nothing. Typically, the database source code from these “diff” engines requires some existing database to get started.

ODBCapture is not a “diff” engine. ODBCapture is unique in its ability to create Oracle database installation scripts that can create different “flavors” of Oracle databases from a common set of source code. The installation occurs after initializing an empty database or PDB with a command such as:

  • Create Database …
  • Create Pluggable Database …
  • Drop Schema …

The published website is https://odbcapture.org

ODBCapture has been successfully tested on the platforms listed at the website above.

ODBCapture has been successfully tested on these database object types:

  • Advanced Queue
  • Advanced Queue Table
  • Context
  • Database Link
  • Database Trigger
  • Directory
  • Foreign Key (pseudo-object)
  • Grant, Database Object (pseudo-object)
  • Grant, System Privilege (pseudo-object)
  • Host ACL (pseudo-object)
  • PL/SQL Function
  • Java Source
  • Index
  • Materialized View
  • Materialized View Index
  • Materialized View Foreign Key
  • Materialized View Trigger
  • Package Body
  • Package Specification
  • PL/SQL Procedure
  • RAS ACL (pseudo-object)
  • Role
  • Scheduler Job
  • Scheduler Program
  • Scheduler Schedule
  • Sequence
  • Schema Trigger
  • Synonym
  • Table
  • Table Index
  • Table Foreign Key
  • Table Trigger
  • Type Body
  • Type Specification
  • User
  • View
  • View Foreign Key
  • View Trigger
  • Wallet ACL (pseudo-object)
  • XDB ACL (pseudo-object)

ODBCapture has been successfully tested on these database data types:

    • BLOB
    • CHAR
    • CLOB
    • DATE
    • INTERVAL_DAY_TO_SECOND
    • JSON
    • NUMBER
    • RAW
    • TIMESTAMP
    • TIMESTAMP_WITH_LOCAL_TZ
    • TIMESTAMP_WITH_TZ
    • VARCHAR2
    • XMLTYPE


    Monday, August 19, 2024

    GitHub Pages Jekyll Theme Google Analytics Update

    Let's break down that title:

    • GitHub Pages - a website hosting feature in GitHub
    • Jekyll Theme - a static site generator with built-in support for GitHub Pages.
    • Google Analytics Update - Google Analytics 4 is the next generation of Analytics, which collects event-based data from both websites and apps.

    If you are using Jekyll Themed GitHub Pages with Google Analytics and haven't found the simplest way to convert it from Universal Analytics to GA4, this is your "how to"...

    Jekyll Themes


    Universal Analytics was a simple setup before GA4.  The "_config.yml" included a "google_analytics:" location where the Universal Analytics ID would activate the needed website changes to begin data collection.  With GA4 the new Measurement ID replaces the Universal Analytics ID.  However, the website activation is different for GA4.  Fortunately, the creators of Jekyll Themes included a place to update this new website activation.

    Head Custom Google Analytics


    GitHub Pages has several supported Jekyll Themes.  Each of these themes has an "_includes/head-custom-google-analytics.html" file.

    Within the "_includes/head-custom-google-analytics.html" file is the website activation code:
    {% if site.google_analytics %}
      <script>
        (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
        (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
                m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
            })(window,document,'script','//www.google-analytics.com/analytics.js','ga');
        ga('create', '{{ site.google_analytics }}', 'auto');
        ga('send', 'pageview');
      </script>
    {% endif %}
        
    That code needs to be changed to the new data collection script:
    {% if site.google_analytics %}
      <!-- Google tag (gtag.js) -->
      <script async src="https://www.googletagmanager.com/gtag/js?id={{ site.google_analytics }}"></script>
      <script>
        window.dataLayer = window.dataLayer || [];
        function gtag(){dataLayer.push(arguments);}
        gtag('js', new Date());
        gtag('config', '{{ site.google_analytics }}');
      </script>
    {% endif %}
        

    Call Tree


    The "_includes/head-custom-google-analytics.html" file is called from the "_includes/head-custom.html" file by this line:
    {% include head-custom-google-analytics.html %}
    The "_includes/head-custom.html" file is called from the "_layouts/default.html" file by this line:
        {% include head-custom.html %}
    The "_layouts/default.html" file is the top level template in each Jekyll Theme.

    Procedure

    1. Create an "_includes" folder in the same folder as the "_config.yml" file.
    2. Copy the "_includes/head-custom-google-analytics.html" file for the correct theme from the list above.
    3. Paste the "_includes/head-custom-google-analytics.html" file into the new "_includes" folder.
    4. Modify the contents of the new "_includes/head-custom-google-analytics.html" file as described above.
    5. Push the changes and wait for GitHub to refresh the website.
    6. Confirm the new Google Analytics code in the header of the HTML source.
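    Step 2 of the procedure can be double-checked before pushing.  The sketch below writes a stand-in copy of the modified include file under a temporary directory (the path and contents are illustrative) and greps for the GA4 loader while confirming the old analytics.js snippet is gone:

```shell
# Sandbox stand-in for the repository's _includes folder
tmp=$(mktemp -d)
mkdir -p "$tmp/_includes"

# Stand-in for the modified head-custom-google-analytics.html
cat > "$tmp/_includes/head-custom-google-analytics.html" <<'EOF'
{% if site.google_analytics %}
  <script async src="https://www.googletagmanager.com/gtag/js?id={{ site.google_analytics }}"></script>
{% endif %}
EOF

# The check: GA4 loader present, legacy analytics.js absent
grep -q 'googletagmanager.com/gtag/js' "$tmp/_includes/head-custom-google-analytics.html" \
  && ! grep -q 'google-analytics.com/analytics.js' "$tmp/_includes/head-custom-google-analytics.html" \
  && echo 'GA4 snippet present'
```

    The same two greps, run against the real file in the repository, catch a half-finished edit before GitHub rebuilds the site.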

    Friday, July 5, 2024

    Oracle Cloud Autonomous Database File Not Found for TLS Connection


    The Attempt:

    I created an Autonomous Database on Oracle Cloud.  I used TLS (protocol=tcps) to successfully connect from a Windows SQL*Plus Instant Client to the database on Oracle Cloud.  However, I was unable to connect from an Ubuntu (Gnome) SQL*Plus Instant Client to the database on Oracle Cloud.


    The Error:

    Each connection attempt from Ubuntu SQL*Plus Instant Client to the Autonomous Database on Oracle Cloud threw a "file not found" error.


    Troubleshooting:

    I ran a "trace" in "sqlnet.ora" and found many "file not found" errors, all seemed to be related to Wallet, SSL, and Certificate Store.  In the trace file, I found SQL*Net was looking for the Certificate Store in "/etc/pki/tls/cert.pem".


    The Solution:

    I did not configure a wallet.  A TLS connection authenticates the server with CA certificates from the system certificate store, so no client wallet is required.

    I found a single file (PEM bundle) in "/etc/ssl/certs/ca-certificates.crt".  This was confirmed at Ubuntu's Website: Ubuntu root CA certificate trust store location

    I could not find a configuration in SQL*Net to change the location of the Certificate Store from "/etc/pki/tls/cert.pem" to "/etc/ssl/certs/ca-certificates.crt".

    I did find moscicki at GitHub had a Symbolic Link that I was missing.  After I created the symbolic link, I was able to make the TLS connection from Oracle Instant Client for Linux x86-64 Version 23.4.0.0.0 to an Autonomous Database on Oracle Cloud without a wallet.

    Run these as root in Ubuntu:

    mkdir -p /etc/pki/tls
    ln -s /etc/ssl/certs/ca-certificates.crt /etc/pki/tls/cert.pem
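    The effect of the fix can be rehearsed in a sandbox first; the temporary paths below stand in for the real /etc locations, and the file content is a placeholder:

```shell
# Rehearse the symlink fix in a temporary directory before touching /etc
root=$(mktemp -d)
mkdir -p "$root/etc/ssl/certs" "$root/etc/pki/tls"

# Stand-in for the Ubuntu CA bundle
echo 'PEM BUNDLE' > "$root/etc/ssl/certs/ca-certificates.crt"

# The symbolic link SQL*Net will follow when it opens /etc/pki/tls/cert.pem
ln -s "$root/etc/ssl/certs/ca-certificates.crt" "$root/etc/pki/tls/cert.pem"

# Reading through the link lands on the CA bundle
cat "$root/etc/pki/tls/cert.pem"
```

    Once the link is in place on the real paths, SQL*Net finds the Certificate Store at the location it was hard-wired to check.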

    Tuesday, July 2, 2024

    Git Clone GLIBCXX_3.4.29 Not Found on Xubuntu


    The Attempt:


    This was my first attempt to clone a GitHub repository on Xubuntu (XFCE Desktop) using VSCode. I did not have this problem with Ubuntu (Gnome Desktop).
    • VSCode Version
      Version: 1.90.2
      Commit: 5437499feb04f7a586f677b155b039bc2b3669eb
      Date: 2024-06-18T22:33:48.698Z
      Electron: 29.4.0
      ElectronBuildId: 9728852
      Chromium: 122.0.6261.156
      Node.js: 20.9.0
      V8: 12.2.281.27-electron.0
      OS: Linux x64 5.15.0-113-generic snap
    • Git Version
      root# git --version
      git version 2.34.1
    • XUbuntu Version
      root# lsb_release -a
      No LSB modules are available.
      Distributor ID: Ubuntu
      Description: Ubuntu 22.04.4 LTS
      Release: 22.04
      Codename: jammy

      root# dpkg -l '*-desktop' | grep ^ii | grep 'ubuntu-'
      ii xubuntu-desktop 2.241 amd64 Xubuntu desktop system

    The Error Message:


    I received an error that included the following:
    git clone https://github.com/USER/REPO.git /home/USER/github/REPO --progress
    /snap/core20/current/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by /lib/x86_64-linux-gnu/libproxy.so.1)
    (USER is my username and REPO is the repository name.)

    Troubleshooting:


    I searched Google and found a myriad of solutions that included:
    • Update the snap base from "core20" to "core22"
    • Remove/Reinstall VSCode
    • Modify "snapcraft.yaml"
    I confirmed my "core22" library contained the needed GLIBCXX_3.4.29 module. However, I didn't like any of those solutions because they required some underlying "tweaking" that has caused trouble in the past.
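    The confirmation itself boils down to listing the GLIBCXX version tags embedded in a libstdc++ binary and grepping for the one the error names.  The stand-in file below replaces the real library (on disk it would live under the snap's lib/x86_64-linux-gnu directory) so the pattern is visible without that path:

```shell
# Stand-in for libstdc++.so.6; the real file embeds these tags as symbol versions
lib=$(mktemp)
printf 'GLIBCXX_3.4.28\nGLIBCXX_3.4.29\nGLIBCXX_3.4.30\n' > "$lib"

# Same idea as: strings libstdc++.so.6 | grep GLIBCXX_3.4.29
grep -o 'GLIBCXX_3\.4\.29' "$lib" && echo 'module present'
```

    If the grep comes back empty against the library the loader is actually using, the runtime really is too old for the caller.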

    The Solution:


    I stumbled onto this solution accidentally, and I don't know why it works. Run these commands using the "Git Bash Terminal" in VSCode:
    cd /home/USER/github
    git clone https://github.com/USER/REPO.git REPO --progress

    Thursday, June 13, 2024

    ORA-12637: Packet Receive Failed - From Windows

    What's the Problem?

    I was intermittently getting an "ORA-12637: Packet Receive Failed" from Windows on a local network.

    • Running an Oracle Client on Windows.
    • Connecting to an Oracle Database on my Local Network.

    What's the Fix?

    Add an entry for the database server into the Windows "hosts" file.

    I have a DHCP server that hands out specific IPv4 addresses to each server.  I added the IPv4 address and hostname to the "C:\Windows\System32\drivers\etc\hosts" file.

    Why Does this Work?

    The mechanism that was supposed to resolve IP addresses from hostnames wasn't working correctly.

    How to Troubleshoot?

    1. Create a "C:\app\oracle\network\admin\sqlnet.ora" file with the following entries.  (My version 21.3 ORACLE_HOME is "C:\app\oracle")
    DIAG_ADR_ENABLED=off
    LOG_DIRECTORY_CLIENT=C:\app\oracle\network\log
    TRACE_DIRECTORY_CLIENT=C:\app\oracle\network\traces
    # Trace Levels: off (0), user (4), admin (10), support (16)
    TRACE_LEVEL_CLIENT=admin
    2. Attempt to connect to the database.

    3. Find a new trace file in "C:\app\oracle\network\traces" folder.

    4. I found the following lines in the new trace file "cli_13580.trc":
    (13580) [13-JUN-2024 11:59:26:485] nttbnd2addr: looking up IP addr for host: hostname1
    (13580) [13-JUN-2024 11:59:26:485] snlinGetAddrInfo: entry
    (13580) [13-JUN-2024 11:59:27:512] snlinGetAddrInfo: getaddrinfo() failed with error 11001
    (13580) [13-JUN-2024 11:59:27:512] snlinGetAddrInfo: exit
    (13580) [13-JUN-2024 11:59:27:512] nttbnd2addr:  *** hostname lookup failure! ***
    (13580) [13-JUN-2024 11:59:27:512] nttbnd2addr: exit
    5. After adding the "hosts" file entry, I no longer received the ORA-12637 error.
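    A hosts entry is just whitespace-separated "address hostname" pairs, so the fix can be sanity-checked with a quick lookup against the file itself.  The sketch below runs against a temporary copy (on Windows the real file is "C:\Windows\System32\drivers\etc\hosts"); the IPv4 address is a made-up example, while "hostname1" is the name from the trace:

```shell
# Temporary stand-in for the hosts file
hosts=$(mktemp)
printf '192.168.1.50 hostname1\n' >> "$hosts"

# Look up the address the client should now resolve for hostname1
awk '$2 == "hostname1" { print $1 }' "$hosts"
```

    If the lookup prints nothing, the entry is missing or malformed and name resolution will fall through to the broken mechanism again.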

    Friday, February 23, 2024

    Synergy with Wayland Using Chromium

     This is a rather specific issue.

    • Running Synergy 1.11 on Ubuntu.
    • Sharing Keyboard/Mouse from Ubuntu to Other Computers.
    • Upgraded Ubuntu to 22.04.
    • Synergy stopped sharing.

    The problem wasn't the network or the configuration because
    • Synergy successfully connected from all clients to the server.
    • The server showed all Synergy clients successfully connected.

    The odd thing is Chromium.  If I start Chromium on the Ubuntu 22.04 server, the keyboard sharing starts working.  If I stop Chromium on the Ubuntu 22.04 server, the keyboard sharing stops working.  I also noticed the log messages on the Synergy client included "LANGUAGE DEBUG" warnings for every keystroke while the cursor was moved to the client.  I assume Chromium set up some debugging channel that Synergy was able to use.  (I did not research this.)

    Takeaway 1:

    The root cause of the problem appears to be the Ubuntu 22.04 switch to Wayland.  Synergy doesn't support Wayland, but support is planned.

    Takeaway 2:

    Synergy has an interim fix.  Don't use Wayland with Ubuntu; log in with an Xorg session instead.

    Thursday, February 15, 2024

    Unit Testing a Thick Database

    Why Unit Test a Database?

    In the Java (OO) world, unit testing specifically avoids testing persistence.  The persistence engine (database) is mocked so it doesn't interfere with the purity of the testing.  Also, Java (OO) purists attempt to remove all logic from the database.  However, we typically don't find such purity in practice.


    What is a Thick Database?

    A Thick Database is a database with lots of logic built-in.  Dulcian.com does a great job of explaining what it is and why it's useful.  In practice, we tend to see some logic implemented in the database for a variety of reasons.


    What is a Unit in the Database?

    Most references define a "unit" as something like the smallest bit of logic in a system.  For this discussion, "unit" will refer to an interface.  Interfaces in an Oracle database include the following:

    • Packages
      • Procedures (Public and Internal)
      • Functions (Public and Internal)
    • Procedures
    • Functions
    • Types with Methods
    • Tables
      • Check Constraints
      • DML Triggers
    • View Triggers
    • Session Triggers
    • Database Triggers

    Is a Unit Test Framework Required?

    No.  However, a unit test framework does allow easier integration with CI/CD.  Examples of database unit test frameworks include utPLSQL.

    Alternatively, here is a bare-bones package that is useful when a framework is not available.

    create package simple_ut
    as
        -- Boolean-style results (numbers, so they can be used in dynamic SQL)
        C_FALSE           constant number := -1;
        C_TRUE            constant number := 0;
        -- Output verbosity settings for the reporting procedure
        C_BRIEF_OUTPUT    constant number := 0;
        C_NORMAL_OUTPUT   constant number := 1;
        C_VERBOSE_OUTPUT  constant number := 2;
        g_output_mode              number := C_NORMAL_OUTPUT;
        -- Print a banner line to mark the start of a group of tests
        procedure ut_announce
                (in_str         in varchar2);
        -- Evaluate a boolean expression passed as text; returns C_TRUE or C_FALSE
        function ut_assert
                (in_test_name   in varchar2
                ,in_test_string in varchar2)
            return number;
        -- Evaluate the expression and report PASSED/FAILED via DBMS_OUTPUT
        procedure ut_assert
                (in_test_name   in varchar2
                ,in_test_string in varchar2);
        -- Example: one passing and one failing assertion
        procedure demo;
    end simple_ut;
    /
    /

    create package body simple_ut
    as
        procedure ut_announce
                (in_str         in varchar2)
        is
        begin
            dbms_output.put_line('');
            dbms_output.put_line('========================================');
            dbms_output.put_line('=== ' || $$PLSQL_UNIT || ' ' || in_str);
        end ut_announce;
        function ut_assert
                (in_test_name   in varchar2
                ,in_test_string in varchar2)
            return number
        is
            l_result   number;
        begin
            execute immediate 'BEGIN'                        || CHR(10) ||
                              'if ' || in_test_string        || CHR(10) ||
                              'then'                         || CHR(10) ||
                                 ':n1 := ' || C_TRUE  || ';' || CHR(10) ||
                              'else'                         || CHR(10) ||
                                 ':n1 := ' || C_FALSE || ';' || CHR(10) ||
                              'end if;'                      || CHR(10) ||
                              'END;' using out l_result;
            return l_result;
        end ut_assert;
        procedure ut_assert
                (in_test_name   in varchar2
                ,in_test_string in varchar2)
        is
        begin
            if ut_assert(in_test_name, in_test_string) = C_TRUE
            then
                if g_output_mode != C_BRIEF_OUTPUT
                then
                    DBMS_OUTPUT.PUT_LINE(' -) PASSED: ' || in_test_name);
                    if g_output_mode = C_VERBOSE_OUTPUT
                    then
                        DBMS_OUTPUT.PUT_LINE('             Details: "' ||
                            replace(in_test_string,CHR(10),';') || '"' );
                    end if;
                end if;
            else
                DBMS_OUTPUT.PUT_LINE('*** FAILED: ' || in_test_name);
                DBMS_OUTPUT.PUT_LINE('             Details: "' ||
                    replace(in_test_string,CHR(10),';') || '"' );
            end if;
        end ut_assert;
        procedure demo
        is
        begin
            ut_assert('This test passes', '1 = 1');
            ut_assert('This test fails', '0 = 1');
        end demo;
    end simple_ut;
    /

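    To see what "ut_assert" actually executes, here is the anonymous block it assembles for the demo's first test string, '1 = 1'.  It is reproduced in shell purely to display the generated text; the values of c_true and c_false mirror the constants in the package specification:

```shell
# Rebuild the text that ut_assert passes to EXECUTE IMMEDIATE
test_string='1 = 1'
c_true=0     # simple_ut.C_TRUE
c_false=-1   # simple_ut.C_FALSE

printf '%s\n' \
  'BEGIN' \
  "if ${test_string}" \
  'then' \
  ":n1 := ${c_true};" \
  'else' \
  ":n1 := ${c_false};" \
  'end if;' \
  'END;'
```

    The :n1 bind variable carries the C_TRUE/C_FALSE result back to the caller, which is why the test string can be any boolean expression the database can evaluate.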

    Where to start?

    1. Pick something to test, like a procedure in a package.
    2. Define a "No Data" Test Case for that procedure (nothing for the procedure to do).
    3. Create a Unit Test Package to contain your Unit Test code.
    4. Create a procedure in the Unit Test Package called "no_data_to_process".
    5. Write the "no_data_to_process" procedure to call the procedure (from step 1) without any test data setup.
    6. Add "ut_assert" procedure calls from the "simple_ut" package to document results.
    7. Run the "no_data_to_process" procedure and check results in DBMS_OUTPUT.
    8. Add more tests.

    Are there any Tips or Tricks?

    1. As part of the Test Case, test the data setup (if any) before the "actual" unit test.
    2. Run the "actual" unit test in a separate procedure with a dedicated Exception Handler.
    3. Create Unit Test Procedures/Functions to load Test Data into a single record variable.
    4. Use the single record variable to INSERT Test Data into a table.
    5. Setup Date/Time sensitive data for each run of a Unit Test.
    6. Create additional test customers/locations/actors as needed to re-run unit tests.
    7. Capture Sequence Generator values after the "actual" unit test.
    8. Include testing of logs after the "actual" unit test.
    9. Cleanup Test Data only when necessary to re-run unit tests.
    10. Develop a complete Test Data Set with a variety of Test Cases setup and ready for testing.

    What is the downside?

    If the database logic is unstable (constantly changing), it is very difficult to maintain the unit tests.  One strategy is to Unit Test only the functionality that is stable.

    What is the Upside?

    Several obvious, but sometimes unexpected outcomes of unit testing:
    • Testing code that serves no purpose results in removal of useless code.
    • Fault insertion testing results in much better error messages and error recovery.
    • Thinking through unit test cases results in simplification of overall implementation.