Wednesday, October 9, 2024

Oracle Connection to Docker Container Failed on Ubuntu 24.04

What Was the Problem?

After upgrading to Ubuntu 24.04, connections to an Oracle database in a Docker container failed.  These connections were working before the upgrade.

  • SQL*Plus: ORA-12637: Packet receive failed
  • "sqlnet.ora" (TRACE_LEVEL_CLIENT=user): Fatal NI connect error 12170
  • Oracle SQL Developer Extension for VSCode: Failed to connect

It Was Not AppArmor

I found many entries about Docker not starting after an Ubuntu upgrade, with AppArmor often named as the culprit.  While I don't know the root cause of my problem, no changes to AppArmor were needed to solve it.

What is the Interim Fix?

I found several fixes and included links to the relevant websites.

SQL*Plus/SQL*Net:

Update "sqlnet.ora" File - Add "DISABLE_OOB=ON" to disable Out-of-Band communication.  Note: The location of the "sqlnet.ora" file can be based on the environment settings of ORACLE_HOME or TNS_ADMIN.
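A minimal sketch of that change, assuming the file is located via TNS_ADMIN (the fallback directory below is only an example; check your own TNS_ADMIN and ORACLE_HOME settings):

```shell
# Add DISABLE_OOB=ON to sqlnet.ora if it is not already there.
# The fallback path is an example; sqlnet.ora may live under
# $ORACLE_HOME/network/admin or wherever TNS_ADMIN points.
export TNS_ADMIN="${TNS_ADMIN:-$HOME/oracle/network/admin}"
mkdir -p "$TNS_ADMIN"
grep -q '^DISABLE_OOB' "$TNS_ADMIN/sqlnet.ora" 2>/dev/null ||
  echo 'DISABLE_OOB=ON' >> "$TNS_ADMIN/sqlnet.ora"
```

New SQL*Plus connections made after this change should no longer attempt Out-of-Band communication.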

Oracle SQL Developer Extension for VSCode

Update "Advanced" Tab in "Update Connection" - Add a new name "oracle.net.disableOob" with a value of "true" to disable Out-of-Band communication.

What is the Root Cause?

I don't know.  I have not found it.  I assume there is a loss of functionality when Out-of-Band communication is not available for an Oracle client.  I will continue to follow this and add updates as I find them.

Friday, October 4, 2024

Unloading/Loading NLS (National Language Support) Data in Oracle

What is the problem?

Every Oracle database has two character sets for "string" data types.  These character sets are defined when a database is created.

  • CHARACTER SET - This is the main character set for the database. It is set for specific environment and application needs.  This character set is normally used to store and handle all string data.
  • NATIONAL CHARACTER SET - This is the NLS (National Language Support) character set and is available for multi-lingual string data storage on all Oracle databases.  It can be set to one of these UNICODE character sets: "AL16UTF16" or "UTF8".

Oracle's DB Sample Schemas use the NLS character set for multi-lingual string data.  This data is loaded with SQL INSERT statements.  However, we have a need to unload and load this data to/from text files.

The problem is that text files will generally have string data in the database's main character set.  A different technique is needed to unload and load the NLS string data.  Oracle's DB Sample Schemas use the UNISTR format to segregate NLS string data from string data that uses the database's main character set.

We will be using the UNISTR format as well.

How to Unload NLS String Data to a Text File?

The US7ASCII character set is a common subset of almost all main character sets for an Oracle database.  This is important because any NLS string data saved to a text file needs a common format with the database's main character set.  Also note the US7ASCII character set only uses the 7 LSBs (least significant bits) for each character.  Characters in this range can be found when Oracle's ASCII function returns decimal values between 0 and 127.  Additionally, the "\" character needs to be "escaped" with another "\" character.

We used the following function to encode database NLS string data to UNISTR format.  The resulting string data from this function will only contain US7ASCII characters and can be safely stored in a text file.
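That function is not reproduced here, but the idea can be sketched in shell as a hypothetical `unistr_encode`.  For simplicity this sketch escapes every character as a `\XXXX` UTF-16 code unit; a production version would pass plain ASCII characters through unchanged and double any `\` characters, as described above.

```shell
# Hypothetical sketch: encode a UTF-8 string as Oracle UNISTR escape codes.
# The output contains only US7ASCII characters and is safe for text files.
unistr_encode() {
  # UNISTR escapes are UTF-16 code units, so convert to UTF-16BE first,
  # then dump one hex byte per field and join byte pairs as \XXXX.
  printf '%s' "$1" |
    iconv -f UTF-8 -t UTF-16BE |
    od -An -tx1 -v |
    awk '{ for (i = 1; i <= NF; i++) b[++n] = $i }
         END {
           for (i = 1; i <= n; i += 2) printf "\\%s%s", toupper(b[i]), toupper(b[i+1])
           print ""
         }'
}

unistr_encode 'A'    # → \0041
```

A string encoded this way can be reloaded with Oracle's UNISTR function, which converts the `\XXXX` escapes back into NLS string data.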

Monday, September 23, 2024

SQL*Plus Stopped Working in Ubuntu 24 (Noble Numbat)

After the upgrade to Ubuntu 24 (Noble Numbat), Oracle's SQL*Plus stopped working and started throwing this error:

sqlplus: error while loading shared libraries: libaio.so.1: cannot open shared object file: No such file or directory

Many thanks to https://askubuntu.com/users/367548/user3032965 for solving this problem in the comment at https://askubuntu.com/a/1526423.  For Ubuntu 24.04 and later, the name for libaio1 has changed!  The solution is to create a symbolic link from the new name to the old name:

sudo ln -s /usr/lib/x86_64-linux-gnu/libaio.so.1t64 /usr/lib/libaio.so.1

In my situation, "libaio1t64" was already installed.  SQL*Plus started working again after I created the symbolic link.

Friday, August 30, 2024

Native PL/SQL Application to Capture Source Code and Configuration Data

I recently completed development and testing on the open source project ODBCapture.

ODBCapture is a native PL/SQL application that can be used to capture self-building scripts (source code and configuration data) for a database.

Existing tools like TOAD, PL/SQL Developer, and SQL*Developer can create “source code” scripts from an Oracle database. They can also create data load scripts from an Oracle database. What they cannot do is create a cohesive set of installation scripts that execute from a single “install.sql” script.

Existing database source code is handled by Liquibase and Flyway which are “diff” engines. These “diff” engines simply track changes to a database. Rarely is the source code from these “diff” engines ever used to create a database from nothing. Typically, the database source code from these “diff” engines require some existing database to get started.

ODBCapture is not a “diff” engine. ODBCapture is unique in its ability to create Oracle database installation scripts that can create different “flavors” of Oracle databases from a common set of source code. This installation occurs after an initialization to an empty database or PDB, such as:

  • Create Database …
  • Create Pluggable Database …
  • Drop Schema …

The published website is https://odbcapture.org

ODBCapture has been successfully tested on multiple platforms.

ODBCapture has been successfully tested on these database object types:

  • Advanced Queue
  • Advanced Queue Table
  • Context
  • Database Link
  • Database Trigger
  • Directory
  • Foreign Key (pseudo-object)
  • Grant, Database Object (pseudo-object)
  • Grant, System Privilege (pseudo-object)
  • Host ACL (pseudo-object)
  • PL/SQL Function
  • Java Source
  • Index
  • Materialized View
  • Materialized View Index
  • Materialized View Foreign Key
  • Materialized View Trigger
  • Package Body
  • Package Specification
  • PL/SQL Procedure
  • RAS ACL (pseudo-object)
  • Role
  • Scheduler Job
  • Scheduler Program
  • Scheduler Schedule
  • Sequence
  • Schema Trigger
  • Synonym
  • Table
  • Table Index
  • Table Foreign Key
  • Table Trigger
  • Type Body
  • Type Specification
  • User
  • View
  • View Foreign Key
  • View Trigger
  • Wallet ACL (pseudo-object)
  • XDB ACL (pseudo-object)

ODBCapture has been successfully tested on these database data types:

    • BLOB
    • CHAR
    • CLOB
    • DATE
    • INTERVAL_DAY_TO_SECOND
    • JSON
    • NUMBER
    • RAW
    • TIMESTAMP
    • TIMESTAMP_WITH_LOCAL_TZ
    • TIMESTAMP_WITH_TZ
    • VARCHAR2
    • XMLTYPE


    Monday, August 19, 2024

    GitHub Pages Jekyll Theme Google Analytics Update

    Let's break down that title:

    • GitHub Pages - a website hosting feature in GitHub
    • Jekyll Theme - a static site generator with built-in support for GitHub Pages.
    • Google Analytics Update - Google Analytics 4 is the next generation of Analytics, which collects event-based data from both websites and apps.

    If you are using Jekyll Themed GitHub Pages with Google Analytics and haven't found the simplest way to convert it from Universal Analytics to GA4, this is your "how to"...

    Jekyll Themes


    Universal Analytics was a simple setup before GA4.  The "_config.yml" included a "google_analytics:" location where the Universal Analytics ID would activate the needed website changes to begin data collection.  With GA4 the new Measurement ID replaces the Universal Analytics ID.  However, the website activation is different for GA4.  Fortunately, the creators of Jekyll Themes included a place to update this new website activation.
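    Assuming a theme that reads "google_analytics:" from "_config.yml", the only configuration change is swapping the IDs (the value below is a placeholder):

```yaml
# _config.yml -- replace the old Universal Analytics ID (UA-XXXXXXX-X)
# with the new GA4 Measurement ID
google_analytics: G-XXXXXXXXXX
```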

    Head Custom Google Analytics


    GitHub Pages has several supported Jekyll Themes.  Each of these themes has an "_includes/head-custom-google-analytics.html" file.

    Within the "_includes/head-custom-google-analytics.html" file is the website activation code:
    {% if site.google_analytics %}
      <script>
        (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
        (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
                m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
            })(window,document,'script','//www.google-analytics.com/analytics.js','ga');
        ga('create', '{{ site.google_analytics }}', 'auto');
        ga('send', 'pageview');
      </script>
    {% endif %}
        
    That code needs to be changed to the new data collection script:
    {% if site.google_analytics %}
      <!-- Google tag (gtag.js) -->
      <script async src="https://www.googletagmanager.com/gtag/js?id={{ site.google_analytics }}"></script>
      <script>
        window.dataLayer = window.dataLayer || [];
        function gtag(){dataLayer.push(arguments);}
        gtag('js', new Date());
        gtag('config', '{{ site.google_analytics }}');
      </script>
    {% endif %}
        

    Call Tree


    The "_includes/head-custom-google-analytics.html" file is called from the "_includes/head-custom.html" file by this line:
    {% include head-custom-google-analytics.html %}
    The "_includes/head-custom.html" file is called from the "_layouts/default.html" file by this line:
        {% include head-custom.html %}
    The "_layouts/default.html" file is the top level template in each Jekyll Theme.

    Procedure

    1. Create an "_includes" folder in the same folder as the "_config.yml" file.
    2. Copy the "_includes/head-custom-google-analytics.html" file for the correct theme from the list above.
    3. Paste the "_includes/head-custom-google-analytics.html" file into the new "_includes" folder.
    4. Modify the contents of the new "_includes/head-custom-google-analytics.html" file as described above.
    5. Push the changes and wait for GitHub to refresh the website.
    6. Confirm the new Google Analytics code in the header of the HTML source.
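    Steps 1 through 4 can be sketched as shell commands run from the repository root.  Instead of copying the theme's file and editing it, this sketch writes the final GA4 contents (shown above) directly:

```shell
# Steps 1-4: create the override folder and file with the GA4 snippet.
# Run from the repository root (the folder containing _config.yml).
mkdir -p _includes
cat > _includes/head-custom-google-analytics.html <<'EOF'
{% if site.google_analytics %}
  <!-- Google tag (gtag.js) -->
  <script async src="https://www.googletagmanager.com/gtag/js?id={{ site.google_analytics }}"></script>
  <script>
    window.dataLayer = window.dataLayer || [];
    function gtag(){dataLayer.push(arguments);}
    gtag('js', new Date());
    gtag('config', '{{ site.google_analytics }}');
  </script>
{% endif %}
EOF
```

    After pushing, steps 5 and 6 are unchanged: wait for GitHub to rebuild and confirm the gtag.js snippet in the page's HTML source.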

    Friday, July 5, 2024

    Oracle Cloud Autonomous Database File Not Found for TLS Connection


    The Attempt:

    I created an Autonomous Database on Oracle Cloud.  I used TLS (protocol=tcps) to successfully connect from a Windows SQL*Plus Instant Client to the database on Oracle Cloud.  However, I was unable to connect from an Ubuntu (Gnome) SQL*Plus Instant Client to the database on Oracle Cloud.


    The Error:

    Each connection attempt from Ubuntu SQL*Plus Instant Client to the Autonomous Database on Oracle Cloud threw a "file not found" error.


    Troubleshooting:

    I ran a "trace" in "sqlnet.ora" and found many "file not found" errors, all seemed to be related to Wallet, SSL, and Certificate Store.  In the trace file, I found SQL*Net was looking for the Certificate Store in "/etc/pki/tls/cert.pem".


    The Solution:

    I did not configure a wallet.  A one-way TLS connection uses the system's CA certificate store instead of an Oracle wallet.

    I found a single file (PEM bundle) in "/etc/ssl/certs/ca-certificates.crt".  This was confirmed at Ubuntu's Website: Ubuntu root CA certificate trust store location

    I could not find a configuration in SQL*Net to change the location of the Certificate Store from "/etc/pki/tls/cert.pem" to "/etc/ssl/certs/ca-certificates.crt".

    I did find moscicki at GitHub had a Symbolic Link that I was missing.  After I created the symbolic link, I was able to make the TLS connection from Oracle Instant Client for Linux x86-64 Version 23.4.0.0.0 to an Autonomous Database on Oracle Cloud without a wallet.

    Run these as root in Ubuntu:

    mkdir /etc/pki/tls
    ln -s /etc/ssl/certs/ca-certificates.crt /etc/pki/tls/cert.pem

    Tuesday, July 2, 2024

    Git Clone GLIBCXX_3.4.29 Not Found on Xubuntu


    The Attempt:


    This was my first attempt to clone a GitHub repository on Xubuntu (XFCE Desktop) using VSCode. I did not have this problem with Ubuntu (Gnome Desktop).
    • VSCode Version
      Version: 1.90.2
      Commit: 5437499feb04f7a586f677b155b039bc2b3669eb
      Date: 2024-06-18T22:33:48.698Z
      Electron: 29.4.0
      ElectronBuildId: 9728852
      Chromium: 122.0.6261.156
      Node.js: 20.9.0
      V8: 12.2.281.27-electron.0
      OS: Linux x64 5.15.0-113-generic snap
    • Git Version
      root# git --version
      git version 2.34.1
    • XUbuntu Version
      root# lsb_release -a
      No LSB modules are available.
      Distributor ID: Ubuntu
      Description: Ubuntu 22.04.4 LTS
      Release: 22.04
      Codename: jammy

      root# dpkg -l '*-desktop' | grep ^ii | grep 'ubuntu-'
      ii xubuntu-desktop 2.241 amd64 Xubuntu desktop system

    The Error Message:


    I received an error that included the following:
    git clone https://github.com/USER/REPO.git /home/USER/github/REPO --progress
    /snap/core20/current/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by /lib/x86_64-linux-gnu/libproxy.so.1)
    (USER is my username and REPO is the repository name.)

    Troubleshooting:


    I searched Google and found a myriad of solutions that included:
    • Update from "snap20" to "snap22"
    • Remove/Reinstall VSCode
    • Modify "snapcraft.yaml"

    I confirmed my "snap22" library contained the needed GLIBCXX_3.4.29 module. However, I didn't like any of those solutions because they required some underlying "tweaking" that has caused trouble in the past.

    The Solution:


    I stumbled onto this solution accidentally, and I don't know why it works. Run these commands using the "Git Bash Terminal" in VSCode:
    cd /home/USER/github
    git clone https://github.com/USER/REPO.git REPO --progress

    Thursday, June 13, 2024

    ORA-12637: Packet Receive Failed - From Windows

    What's the Problem?

    I was intermittently getting an "ORA-12637: Packet Receive Failed" from Windows on a local network.

    • Running an Oracle Client on Windows.
    • Connecting to an Oracle Database on my Local Network.

    What's the Fix?

    Add an entry for the database server into the Windows "host" file.

    I have a DHCP server that hands out specific IPv4 addresses to each server.  I added the IPv4 address and hostname to the "C:\Windows\System32\drivers\etc\hosts" file.
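    For illustration, the entry has the usual hosts-file shape; both values below are placeholders ("hostname1" matches the name in the trace later in this post).  This sketch appends to a local example file rather than the protected Windows file:

```shell
# Append an example entry to a local stand-in for the hosts file.
# On Windows the real file is C:\Windows\System32\drivers\etc\hosts
# and editing it requires Administrator rights.
# 192.168.1.50 is a placeholder for the server's DHCP-reserved address.
printf '%s\t%s\n' '192.168.1.50' 'hostname1' >> ./hosts.example
```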

    Why Does this Work?

    The mechanism that was supposed to resolve IP addresses from hostnames wasn't working correctly.

    How to Troubleshoot?

    1. Create a "C:\app\oracle\network\admin\sqlnet.ora" file with the following entries.  (My version 21.3 ORACLE_HOME is "C:\app\oracle")
    DIAG_ADR_ENABLED=off
    LOG_DIRECTORY_CLIENT=C:\app\oracle\network\log
    TRACE_DIRECTORY_CLIENT=C:\app\oracle\network\traces
    # Trace Levels: off (0), user (4), admin (10), support (16)
    TRACE_LEVEL_CLIENT=admin
    2. Attempt to connect to the database.

    3. Find a new trace file in "C:\app\oracle\network\traces" folder.

    4. I found the following lines in the new trace file "cli_13580.trc":
    (13580) [13-JUN-2024 11:59:26:485] nttbnd2addr: looking up IP addr for host: hostname1
    (13580) [13-JUN-2024 11:59:26:485] snlinGetAddrInfo: entry
    (13580) [13-JUN-2024 11:59:27:512] snlinGetAddrInfo: getaddrinfo() failed with error 11001
    (13580) [13-JUN-2024 11:59:27:512] snlinGetAddrInfo: exit
    (13580) [13-JUN-2024 11:59:27:512] nttbnd2addr:  *** hostname lookup failure! ***
    (13580) [13-JUN-2024 11:59:27:512] nttbnd2addr: exit
    5. After adding the "hosts" file entry, I no longer received the ORA-12637 error.

    Friday, February 23, 2024

    Synergy with Wayland Using Chromium

     This is a rather specific issue.

    • Running Synergy 1.11 on Ubuntu.
    • Sharing Keyboard/Mouse from Ubuntu to Other Computers.
    • Upgraded Ubuntu to 22.04.
    • Synergy stopped sharing.

    The problem wasn't the network or the configuration because:
    • Synergy successfully connected from all clients to the server.
    • The server showed all Synergy clients successfully connected.

    The odd thing is Chromium.  If I start Chromium on the Ubuntu 22.04 server, the keyboard sharing starts working.  If I stop Chromium on the Ubuntu 22.04 server, the keyboard sharing stops working.  I also noticed the log messages on the Synergy client included "LANGUAGE DEBUG" warnings for every keystroke while the cursor was moved to the client.  I assume Chromium set up some debugging channel that Synergy was able to use.  (I did not research this.)

    Takeaway 1:

    The root cause of the problem appears to be the Ubuntu 22.04 switch to Wayland.  Synergy doesn't support Wayland, but there are plans to add support.

    Takeaway 2:

    Synergy has an interim fix: don't use Wayland with Ubuntu...

    Thursday, February 15, 2024

    Unit Testing a Thick Database

    Why Unit Test a Database?

    In the Java (OO) world, unit testing specifically avoids testing persistence.  The persistence engine (database) is mocked so it doesn't interfere with the purity of the testing.  Also, Java (OO) purists attempt to remove all logic from the database.  However, we typically don't find such purity in practice.


    What is a Thick Database?

    A Thick Database is a database with lots of logic built-in.  Dulcian.com does a great job of explaining what it is and why it's useful.  In practice, we tend to see some logic implemented in the database for a variety of reasons.


    What is a Unit in the Database?

    Most references define a "unit" as something like the smallest bit of logic in a system.  For this discussion, "unit" will refer to an interface.  Interfaces in an Oracle database include the following:

    • Packages
      • Procedures (Public and Internal)
      • Functions (Public and Internal)
    • Procedures
    • Functions
    • Types with Methods
    • Tables
      • Check Constraints
      • DML Triggers
    • View Triggers
    • Session Triggers
    • Database Triggers

    Is a Unit Test Framework Required?

    No.  However, a unit test framework does allow easier integration with CI/CD.  Examples of database unit test frameworks include:

    Alternatively, here is a bare-bones package that is useful when a framework is not available.

    create package simple_ut
    as
        C_FALSE           constant number := -1;
        C_TRUE            constant number := 0;
        C_BRIEF_OUTPUT    constant number := 0;
        C_NORMAL_OUTPUT   constant number := 1;
        C_VERBOSE_OUTPUT  constant number := 2;
        g_output_mode              number := C_NORMAL_OUTPUT;
        procedure ut_announce
                (in_str         in varchar2);
        function ut_assert
                (in_test_name   in varchar2
                ,in_test_string in varchar2)
            return number;
        procedure ut_assert
                (in_test_name   in varchar2
                ,in_test_string in varchar2);
        procedure demo;
    end simple_ut;
    /

    create package body simple_ut
    as
        procedure ut_announce
                (in_str         in varchar2)
        is
        begin
            dbms_output.put_line('');
            dbms_output.put_line('========================================');
            dbms_output.put_line('=== ' || $$PLSQL_UNIT || ' ' || in_str);
        end ut_announce;
        function ut_assert
                (in_test_name   in varchar2
                ,in_test_string in varchar2)
            return number
        is
            l_result   number;
        begin
            execute immediate 'BEGIN'                        || CHR(10) ||
                              'if ' || in_test_string        || CHR(10) ||
                              'then'                         || CHR(10) ||
                                 ':n1 := ' || C_TRUE  || ';' || CHR(10) ||
                              'else'                         || CHR(10) ||
                                 ':n1 := ' || C_FALSE || ';' || CHR(10) ||
                              'end if;'                      || CHR(10) ||
                              'END;' using out l_result;
            return l_result;
        end ut_assert;
        procedure ut_assert
                (in_test_name   in varchar2
                ,in_test_string in varchar2)
        is
        begin
            if ut_assert(in_test_name, in_test_string) = C_TRUE
            then
                if g_output_mode != C_BRIEF_OUTPUT
                then
                    DBMS_OUTPUT.PUT_LINE(' -) PASSED: ' || in_test_name);
                    if g_output_mode = C_VERBOSE_OUTPUT
                    then
                        DBMS_OUTPUT.PUT_LINE('             Details: "' ||
                            replace(in_test_string,CHR(10),';') || '"' );
                    end if;
                end if;
            else
                DBMS_OUTPUT.PUT_LINE('*** FAILED: ' || in_test_name);
                DBMS_OUTPUT.PUT_LINE('             Details: "' ||
                    replace(in_test_string,CHR(10),';') || '"' );
            end if;
        end ut_assert;
        procedure demo
        is
        begin
            ut_assert('This test passes', '1 = 1');
            ut_assert('This test fails', '0 = 1');
        end demo;
    end simple_ut;
    /


    Where to start?

    1. Pick something to test, like a procedure in a package.
    2. Define a "No Data" Test Case for that procedure (nothing for the procedure to do).
    3. Create a Unit Test Package to contain your Unit Test code.
    4. Create a procedure in the Unit Test Package called "no_data_to_process".
    5. Write the "no_data_to_process" procedure to call the procedure (from step 1) without any test data setup.
    6. Add "ut_assert" procedure calls from the "simple_ut" package to document results.
    7. Run the "no_data_to_process" procedure and check results in DBMS_OUTPUT.
    8. Add more tests.

    Are there any Tips or Tricks?

    1. As part of the Test Case, test the data setup (if any) before the "actual" unit test.
    2. Run the "actual" unit test in a separate procedure with a dedicated Exception Handler.
    3. Create Unit Test Procedures/Functions to load Test Data into a single record variable.
    4. Use the single record variable to INSERT Test Data into a table.
    5. Setup Date/Time sensitive data for each run of a Unit Test.
    6. Create additional test customers/locations/actors as needed to re-run unit tests.
    7. Capture Sequence Generator values after the "actual" unit test.
    8. Include testing of logs after the "actual" unit test.
    9. Cleanup Test Data only when necessary to re-run unit tests.
    10. Develop a complete Test Data Set with a variety of Test Cases setup and ready for testing.

    What is the downside?

    If the database logic is unstable (constantly changing), it is very difficult to maintain the unit tests.  One strategy is to Unit Test only the functionality that is stable.

    What is the Upside?

    Several obvious, but sometimes unexpected outcomes of unit testing:
    • Testing code that serves no purpose results in removal of useless code.
    • Fault insertion testing results in much better error messages and error recovery.
    • Thinking through unit test cases results in simplification of overall implementation.

    Wednesday, January 31, 2024

    Monitoring Blogger with UptimeRobot

    I use UptimeRobot to monitor this site. The free version of UptimeRobot includes:
    • Monitor 50 different URLs.
    • Check each URL every 5 minutes.
    • Monitor HTTP or HTTPS.
    • Test for keyword in response, with upper/lower case option.
    • Limit response times.
    • Send Email alert on failure.
    • Access to their comprehensive monitor dashboard.
    Note: I am not paid by UptimeRobot. I do find it really useful.

    It turns out Blogger is suspicious of servers at UptimeRobot.  The result wreaks havoc when trying to test for a keyword on a Blogger page.  Instead of sending the expected page, Blogger sends something like this to UptimeRobot:

    Our systems have detected unusual traffic from your computer network.  This page checks to see if it's really you sending the requests, and not a robot.

    Obviously, UptimeRobot is a robot.  So, I need a work-around.

    I use the "Full Response" link in UptimeRobot to review the entire response from Blogger.
    1. Go to UptimeRobot
    2. Setup a monitor on a Blogger URL and have it fail.
    3. Click on the new monitor in the navigation stack on the left.
    4. Click on "Details" at the far right of the "Incidents" region at the bottom.
    5. A "Full Response" button may be at the top of the pop-up window (Sometimes, it's not there).
    6. Right-Click and copy the URL for that button.
    7. Use something like "curl" or "wget" to pull that file for review.  (Windows Defender won't allow a browser to download the file in that link.)

    Inside the contents of the "Full Response" file, I look for something unique to my Blogger page and use it for a keyword.  Yes, it technically is not loading a page from this site.  However, it does confirm Blogger is responding and it is sending something that is unique to this site.

    Why did I make this post?  Blogger changed the response page and all my keyword monitors started failing.  I need this page for notes if/when it happens again...

    Note: In order to keep things easy for UptimeRobot, I check all my URLs on a 4-hour interval.  Checking these URLs 6 times a day is plenty and it reduces the total load on this free service.

    Thank you UptimeRobot.