sbuild (Debian sbuild) 0.85.11~bpo12+1 (31 August 2024) on debusine-worker-arm64-demeter-07.freexian.com
+==============================================================================+
| python-hmmlearn 0.3.0-5+bd1 (arm64)          Fri, 04 Oct 2024 23:08:25 +0000 |
+==============================================================================+
Package: python-hmmlearn
Version: 0.3.0-5+bd1
Source Version: 0.3.0-5
Distribution: sid
Machine Architecture: arm64
Host Architecture: arm64
Build Architecture: arm64
Build Type: any
I: No tarballs found in /var/lib/debusine/worker/.cache/sbuild
Unpacking /var/lib/debusine/worker/system-images/820286/system.tar.xz to /tmp/tmp.sbuild.7z2ZvNpKRa...
I: NOTICE: Log filtering will replace 'sbuild-unshare-dummy-location' with '<<CHROOT>>'
+------------------------------------------------------------------------------+
| Chroot Setup Commands                                                        |
+------------------------------------------------------------------------------+
rm -f /etc/resolv.conf
----------------------
I: Finished running 'rm -f /etc/resolv.conf'.
Finished processing commands.
--------------------------------------------------------------------------------
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/libpython3-all-dbg_3.12.6-1+debusine1_arm64.deb to /<<CHROOT>>...
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/libpython3-all-dev_3.12.6-1+debusine1_arm64.deb to /<<CHROOT>>...
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/libpython3-dbg_3.12.6-1+debusine1_arm64.deb to /<<CHROOT>>...
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/libpython3-dev_3.12.6-1+debusine1_arm64.deb to /<<CHROOT>>...
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/libpython3-stdlib_3.12.6-1+debusine1_arm64.deb to /<<CHROOT>>...
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/python3-all-dbg_3.12.6-1+debusine1_arm64.deb to /<<CHROOT>>...
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/python3-all-dev_3.12.6-1+debusine1_arm64.deb to /<<CHROOT>>...
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/python3-all-venv_3.12.6-1+debusine1_arm64.deb to /<<CHROOT>>...
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/python3-all_3.12.6-1+debusine1_arm64.deb to /<<CHROOT>>...
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/python3-dbg_3.12.6-1+debusine1_arm64.deb to /<<CHROOT>>...
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/python3-dev_3.12.6-1+debusine1_arm64.deb to /<<CHROOT>>...
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/python3-full_3.12.6-1+debusine1_arm64.deb to /<<CHROOT>>...
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/python3-minimal_3.12.6-1+debusine1_arm64.deb to /<<CHROOT>>...
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/python3-nopie_3.12.6-1+debusine1_arm64.deb to /<<CHROOT>>...
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/python3-venv_3.12.6-1+debusine1_arm64.deb to /<<CHROOT>>...
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/python3_3.12.6-1+debusine1_arm64.deb to /<<CHROOT>>...
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/2to3_3.12.6-1+debusine1_all.deb to /<<CHROOT>>...
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/idle_3.12.6-1+debusine1_all.deb to /<<CHROOT>>...
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/python3-doc_3.12.6-1+debusine1_all.deb to /<<CHROOT>>...
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/python3-examples_3.12.6-1+debusine1_all.deb to /<<CHROOT>>...
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/python3-numpy-dbgsym_1.26.4+ds-11+bootstrap1_arm64.deb to /<<CHROOT>>...
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/python3-numpy_1.26.4+ds-11+bootstrap1_arm64.deb to /<<CHROOT>>...
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/python3-scipy-dbgsym_1.13.1-5+nocheck1_arm64.deb to /<<CHROOT>>...
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/python3-scipy_1.13.1-5+nocheck1_arm64.deb to /<<CHROOT>>...
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/python-scipy-doc_1.13.1-5+nocheck1_all.deb to /<<CHROOT>>...
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/python3-sklearn-lib-dbgsym_1.4.2+dfsg-6+debusine1_arm64.deb to /<<CHROOT>>...
Copying /tmp/debusine-fetch-exec-upload-d9qujoev/python3-sklearn-lib_1.4.2+dfsg-6+debusine1_arm64.deb to /<<CHROOT>>...
I: NOTICE: Log filtering will replace 'build/python-hmmlearn-5GsuRs/resolver-3onYRC' with '<<RESOLVERDIR>>'
+------------------------------------------------------------------------------+
| Update chroot                                                                |
+------------------------------------------------------------------------------+
Get:1 file:/build/python-hmmlearn-5GsuRs/resolver-HzbGeX/apt_archive ./ InRelease
Ign:1 file:/build/python-hmmlearn-5GsuRs/resolver-HzbGeX/apt_archive ./ InRelease
Get:2 file:/build/python-hmmlearn-5GsuRs/resolver-HzbGeX/apt_archive ./ Release [609 B]
Get:3 http://deb.debian.org/debian sid InRelease [202 kB]
Get:2 file:/build/python-hmmlearn-5GsuRs/resolver-HzbGeX/apt_archive ./ Release [609 B]
Get:4 file:/build/python-hmmlearn-5GsuRs/resolver-HzbGeX/apt_archive ./ Release.gpg
Ign:4 file:/build/python-hmmlearn-5GsuRs/resolver-HzbGeX/apt_archive ./ Release.gpg
Get:5 file:/build/python-hmmlearn-5GsuRs/resolver-HzbGeX/apt_archive ./ Packages [44.0 kB]
Get:6 http://deb.debian.org/debian sid/main arm64 Packages [9878 kB]
Fetched 10.1 MB in 1s (7776 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
Calculating upgrade...
The following NEW packages will be installed:
appstream libappstream5 libatomic1 libbrotli1 libcurl3t64-gnutls
libglib2.0-0t64 libgssapi-krb5-2 libk5crypto3 libkeyutils1 libkrb5-3
libkrb5support0 libldap-2.5-0 libnghttp2-14 libnghttp3-9 libngtcp2-16
libngtcp2-crypto-gnutls8 libpsl5t64 librtmp1 libsasl2-2 libsasl2-modules-db
libssh2-1t64 libstemmer0d libxmlb2 shared-mime-info
The following packages will be upgraded:
bsdextrautils bsdutils libblkid1 libbsd0 libmount1 libsmartcols1 libuuid1
libyaml-libyaml-perl lintian login mount tzdata util-linux
13 upgraded, 24 newly installed, 0 to remove and 0 not upgraded.
Need to get 8611 kB of archives.
After this operation, 22.0 MB of additional disk space will be used.
Get:1 http://deb.debian.org/debian sid/main arm64 bsdutils arm64 1:2.40.2-9 [104 kB]
Get:2 http://deb.debian.org/debian sid/main arm64 bsdextrautils arm64 2.40.2-9 [96.6 kB]
Get:3 http://deb.debian.org/debian sid/main arm64 libblkid1 arm64 2.40.2-9 [162 kB]
Get:4 http://deb.debian.org/debian sid/main arm64 libmount1 arm64 2.40.2-9 [190 kB]
Get:5 http://deb.debian.org/debian sid/main arm64 libsmartcols1 arm64 2.40.2-9 [135 kB]
Get:6 http://deb.debian.org/debian sid/main arm64 mount arm64 2.40.2-9 [152 kB]
Get:7 http://deb.debian.org/debian sid/main arm64 libuuid1 arm64 2.40.2-9 [35.5 kB]
Get:8 http://deb.debian.org/debian sid/main arm64 util-linux arm64 2.40.2-9 [1176 kB]
Get:9 http://deb.debian.org/debian sid/main arm64 login arm64 1:4.16.0-2+really2.40.2-9 [79.7 kB]
Get:10 http://deb.debian.org/debian sid/main arm64 tzdata all 2024b-1 [230 kB]
Get:11 http://deb.debian.org/debian sid/main arm64 libatomic1 arm64 14.2.0-5 [10.1 kB]
Get:12 http://deb.debian.org/debian sid/main arm64 libglib2.0-0t64 arm64 2.82.1-1 [1410 kB]
Get:13 http://deb.debian.org/debian sid/main arm64 shared-mime-info arm64 2.4-5 [755 kB]
Get:14 http://deb.debian.org/debian sid/main arm64 libbrotli1 arm64 1.1.0-2+b4 [292 kB]
Get:15 http://deb.debian.org/debian sid/main arm64 libkrb5support0 arm64 1.21.3-3 [32.1 kB]
Get:16 http://deb.debian.org/debian sid/main arm64 libk5crypto3 arm64 1.21.3-3 [80.8 kB]
Get:17 http://deb.debian.org/debian sid/main arm64 libkeyutils1 arm64 1.6.3-3 [9112 B]
Get:18 http://deb.debian.org/debian sid/main arm64 libkrb5-3 arm64 1.21.3-3 [310 kB]
Get:19 http://deb.debian.org/debian sid/main arm64 libgssapi-krb5-2 arm64 1.21.3-3 [126 kB]
Get:20 http://deb.debian.org/debian sid/main arm64 libsasl2-modules-db arm64 2.1.28+dfsg1-8 [20.0 kB]
Get:21 http://deb.debian.org/debian sid/main arm64 libsasl2-2 arm64 2.1.28+dfsg1-8 [55.4 kB]
Get:22 http://deb.debian.org/debian sid/main arm64 libldap-2.5-0 arm64 2.5.18+dfsg-3 [174 kB]
Get:23 http://deb.debian.org/debian sid/main arm64 libnghttp2-14 arm64 1.63.0-1 [71.2 kB]
Get:24 http://deb.debian.org/debian sid/main arm64 libnghttp3-9 arm64 1.4.0-1 [59.5 kB]
Get:25 http://deb.debian.org/debian sid/main arm64 libngtcp2-16 arm64 1.6.0-1 [112 kB]
Get:26 http://deb.debian.org/debian sid/main arm64 libngtcp2-crypto-gnutls8 arm64 1.6.0-1 [18.5 kB]
Get:27 http://deb.debian.org/debian sid/main arm64 libpsl5t64 arm64 0.21.2-1.1 [56.8 kB]
Get:28 http://deb.debian.org/debian sid/main arm64 librtmp1 arm64 2.4+20151223.gitfa8646d.1-2+b4 [56.7 kB]
Get:29 http://deb.debian.org/debian sid/main arm64 libssh2-1t64 arm64 1.11.0-7 [208 kB]
Get:30 http://deb.debian.org/debian sid/main arm64 libcurl3t64-gnutls arm64 8.10.1-1 [329 kB]
Get:31 http://deb.debian.org/debian sid/main arm64 libstemmer0d arm64 2.2.0-4+b1 [112 kB]
Get:32 http://deb.debian.org/debian sid/main arm64 libxmlb2 arm64 0.3.19-1 [58.6 kB]
Get:33 http://deb.debian.org/debian sid/main arm64 libappstream5 arm64 1.0.3-1 [207 kB]
Get:34 http://deb.debian.org/debian sid/main arm64 appstream arm64 1.0.3-1 [466 kB]
Get:35 http://deb.debian.org/debian sid/main arm64 libbsd0 arm64 0.12.2-2 [129 kB]
Get:36 http://deb.debian.org/debian sid/main arm64 lintian all 2.119.0 [1056 kB]
Get:37 http://deb.debian.org/debian sid/main arm64 libyaml-libyaml-perl arm64 0.902.0+ds-2 [33.6 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 8611 kB in 0s (57.9 MB/s)
(Reading database ... 16560 files and directories currently installed.)
Preparing to unpack .../bsdutils_1%3a2.40.2-9_arm64.deb ...
Unpacking bsdutils (1:2.40.2-9) over (1:2.40.2-8) ...
Setting up bsdutils (1:2.40.2-9) ...
(Reading database ... 16560 files and directories currently installed.)
Preparing to unpack .../bsdextrautils_2.40.2-9_arm64.deb ...
Unpacking bsdextrautils (2.40.2-9) over (2.40.2-8) ...
Preparing to unpack .../libblkid1_2.40.2-9_arm64.deb ...
Unpacking libblkid1:arm64 (2.40.2-9) over (2.40.2-8) ...
Setting up libblkid1:arm64 (2.40.2-9) ...
(Reading database ... 16560 files and directories currently installed.)
Preparing to unpack .../libmount1_2.40.2-9_arm64.deb ...
Unpacking libmount1:arm64 (2.40.2-9) over (2.40.2-8) ...
Setting up libmount1:arm64 (2.40.2-9) ...
(Reading database ... 16560 files and directories currently installed.)
Preparing to unpack .../libsmartcols1_2.40.2-9_arm64.deb ...
Unpacking libsmartcols1:arm64 (2.40.2-9) over (2.40.2-8) ...
Setting up libsmartcols1:arm64 (2.40.2-9) ...
(Reading database ... 16560 files and directories currently installed.)
Preparing to unpack .../mount_2.40.2-9_arm64.deb ...
Unpacking mount (2.40.2-9) over (2.40.2-8) ...
Preparing to unpack .../libuuid1_2.40.2-9_arm64.deb ...
Unpacking libuuid1:arm64 (2.40.2-9) over (2.40.2-8) ...
Setting up libuuid1:arm64 (2.40.2-9) ...
(Reading database ... 16560 files and directories currently installed.)
Preparing to unpack .../util-linux_2.40.2-9_arm64.deb ...
Unpacking util-linux (2.40.2-9) over (2.40.2-8) ...
Setting up util-linux (2.40.2-9) ...
(Reading database ... 16560 files and directories currently installed.)
Preparing to unpack .../00-login_1%3a4.16.0-2+really2.40.2-9_arm64.deb ...
Unpacking login (1:4.16.0-2+really2.40.2-9) over (1:4.16.0-2+really2.40.2-8) ...
Preparing to unpack .../01-tzdata_2024b-1_all.deb ...
Unpacking tzdata (2024b-1) over (2024a-4) ...
Selecting previously unselected package libatomic1:arm64.
Preparing to unpack .../02-libatomic1_14.2.0-5_arm64.deb ...
Unpacking libatomic1:arm64 (14.2.0-5) ...
Selecting previously unselected package libglib2.0-0t64:arm64.
Preparing to unpack .../03-libglib2.0-0t64_2.82.1-1_arm64.deb ...
Unpacking libglib2.0-0t64:arm64 (2.82.1-1) ...
Selecting previously unselected package shared-mime-info.
Preparing to unpack .../04-shared-mime-info_2.4-5_arm64.deb ...
Unpacking shared-mime-info (2.4-5) ...
Selecting previously unselected package libbrotli1:arm64.
Preparing to unpack .../05-libbrotli1_1.1.0-2+b4_arm64.deb ...
Unpacking libbrotli1:arm64 (1.1.0-2+b4) ...
Selecting previously unselected package libkrb5support0:arm64.
Preparing to unpack .../06-libkrb5support0_1.21.3-3_arm64.deb ...
Unpacking libkrb5support0:arm64 (1.21.3-3) ...
Selecting previously unselected package libk5crypto3:arm64.
Preparing to unpack .../07-libk5crypto3_1.21.3-3_arm64.deb ...
Unpacking libk5crypto3:arm64 (1.21.3-3) ...
Selecting previously unselected package libkeyutils1:arm64.
Preparing to unpack .../08-libkeyutils1_1.6.3-3_arm64.deb ...
Unpacking libkeyutils1:arm64 (1.6.3-3) ...
Selecting previously unselected package libkrb5-3:arm64.
Preparing to unpack .../09-libkrb5-3_1.21.3-3_arm64.deb ...
Unpacking libkrb5-3:arm64 (1.21.3-3) ...
Selecting previously unselected package libgssapi-krb5-2:arm64.
Preparing to unpack .../10-libgssapi-krb5-2_1.21.3-3_arm64.deb ...
Unpacking libgssapi-krb5-2:arm64 (1.21.3-3) ...
Selecting previously unselected package libsasl2-modules-db:arm64.
Preparing to unpack .../11-libsasl2-modules-db_2.1.28+dfsg1-8_arm64.deb ...
Unpacking libsasl2-modules-db:arm64 (2.1.28+dfsg1-8) ...
Selecting previously unselected package libsasl2-2:arm64.
Preparing to unpack .../12-libsasl2-2_2.1.28+dfsg1-8_arm64.deb ...
Unpacking libsasl2-2:arm64 (2.1.28+dfsg1-8) ...
Selecting previously unselected package libldap-2.5-0:arm64.
Preparing to unpack .../13-libldap-2.5-0_2.5.18+dfsg-3_arm64.deb ...
Unpacking libldap-2.5-0:arm64 (2.5.18+dfsg-3) ...
Selecting previously unselected package libnghttp2-14:arm64.
Preparing to unpack .../14-libnghttp2-14_1.63.0-1_arm64.deb ...
Unpacking libnghttp2-14:arm64 (1.63.0-1) ...
Selecting previously unselected package libnghttp3-9:arm64.
Preparing to unpack .../15-libnghttp3-9_1.4.0-1_arm64.deb ...
Unpacking libnghttp3-9:arm64 (1.4.0-1) ...
Selecting previously unselected package libngtcp2-16:arm64.
Preparing to unpack .../16-libngtcp2-16_1.6.0-1_arm64.deb ...
Unpacking libngtcp2-16:arm64 (1.6.0-1) ...
Selecting previously unselected package libngtcp2-crypto-gnutls8:arm64.
Preparing to unpack .../17-libngtcp2-crypto-gnutls8_1.6.0-1_arm64.deb ...
Unpacking libngtcp2-crypto-gnutls8:arm64 (1.6.0-1) ...
Selecting previously unselected package libpsl5t64:arm64.
Preparing to unpack .../18-libpsl5t64_0.21.2-1.1_arm64.deb ...
Unpacking libpsl5t64:arm64 (0.21.2-1.1) ...
Selecting previously unselected package librtmp1:arm64.
Preparing to unpack .../19-librtmp1_2.4+20151223.gitfa8646d.1-2+b4_arm64.deb ...
Unpacking librtmp1:arm64 (2.4+20151223.gitfa8646d.1-2+b4) ...
Selecting previously unselected package libssh2-1t64:arm64.
Preparing to unpack .../20-libssh2-1t64_1.11.0-7_arm64.deb ...
Unpacking libssh2-1t64:arm64 (1.11.0-7) ...
Selecting previously unselected package libcurl3t64-gnutls:arm64.
Preparing to unpack .../21-libcurl3t64-gnutls_8.10.1-1_arm64.deb ...
Unpacking libcurl3t64-gnutls:arm64 (8.10.1-1) ...
Selecting previously unselected package libstemmer0d:arm64.
Preparing to unpack .../22-libstemmer0d_2.2.0-4+b1_arm64.deb ...
Unpacking libstemmer0d:arm64 (2.2.0-4+b1) ...
Selecting previously unselected package libxmlb2:arm64.
Preparing to unpack .../23-libxmlb2_0.3.19-1_arm64.deb ...
Unpacking libxmlb2:arm64 (0.3.19-1) ...
Selecting previously unselected package libappstream5:arm64.
Preparing to unpack .../24-libappstream5_1.0.3-1_arm64.deb ...
Unpacking libappstream5:arm64 (1.0.3-1) ...
Selecting previously unselected package appstream.
Preparing to unpack .../25-appstream_1.0.3-1_arm64.deb ...
Unpacking appstream (1.0.3-1) ...
Preparing to unpack .../26-libbsd0_0.12.2-2_arm64.deb ...
Unpacking libbsd0:arm64 (0.12.2-2) over (0.12.2-1) ...
Preparing to unpack .../27-lintian_2.119.0_all.deb ...
Unpacking lintian (2.119.0) over (2.118.2) ...
Preparing to unpack .../28-libyaml-libyaml-perl_0.902.0+ds-2_arm64.deb ...
Unpacking libyaml-libyaml-perl (0.902.0+ds-2) over (0.902.0+ds-1) ...
Setting up libkeyutils1:arm64 (1.6.3-3) ...
Setting up bsdextrautils (2.40.2-9) ...
Setting up libbrotli1:arm64 (1.1.0-2+b4) ...
Setting up libyaml-libyaml-perl (0.902.0+ds-2) ...
Setting up libpsl5t64:arm64 (0.21.2-1.1) ...
Setting up libnghttp2-14:arm64 (1.63.0-1) ...
Setting up libkrb5support0:arm64 (1.21.3-3) ...
Setting up libsasl2-modules-db:arm64 (2.1.28+dfsg1-8) ...
Setting up tzdata (2024b-1) ...
Current default time zone: 'Etc/UTC'
Local time is now: Fri Oct 4 23:09:00 UTC 2024.
Universal Time is now: Fri Oct 4 23:09:00 UTC 2024.
Run 'dpkg-reconfigure tzdata' if you wish to change it.
Setting up librtmp1:arm64 (2.4+20151223.gitfa8646d.1-2+b4) ...
Setting up libatomic1:arm64 (14.2.0-5) ...
Setting up libk5crypto3:arm64 (1.21.3-3) ...
Setting up libsasl2-2:arm64 (2.1.28+dfsg1-8) ...
Setting up libnghttp3-9:arm64 (1.4.0-1) ...
Setting up mount (2.40.2-9) ...
Setting up libngtcp2-16:arm64 (1.6.0-1) ...
Setting up libkrb5-3:arm64 (1.21.3-3) ...
Setting up libstemmer0d:arm64 (2.2.0-4+b1) ...
Setting up libssh2-1t64:arm64 (1.11.0-7) ...
Setting up libbsd0:arm64 (0.12.2-2) ...
Setting up libngtcp2-crypto-gnutls8:arm64 (1.6.0-1) ...
Setting up login (1:4.16.0-2+really2.40.2-9) ...
Setting up libldap-2.5-0:arm64 (2.5.18+dfsg-3) ...
Setting up libglib2.0-0t64:arm64 (2.82.1-1) ...
No schema files found: doing nothing.
Setting up shared-mime-info (2.4-5) ...
Warning: program compiled against libxml 212 using older 209
Setting up libgssapi-krb5-2:arm64 (1.21.3-3) ...
Setting up libxmlb2:arm64 (0.3.19-1) ...
Setting up libcurl3t64-gnutls:arm64 (8.10.1-1) ...
Setting up libappstream5:arm64 (1.0.3-1) ...
Setting up appstream (1.0.3-1) ...
✔ Metadata cache was updated successfully.
Setting up lintian (2.119.0) ...
Processing triggers for libc-bin (2.40-3) ...
Processing triggers for man-db (2.13.0-1) ...
+------------------------------------------------------------------------------+
| Fetch source files                                                           |
+------------------------------------------------------------------------------+
Local sources
-------------
/tmp/debusine-fetch-exec-upload-d9qujoev/python-hmmlearn_0.3.0-5.dsc exists in /tmp/debusine-fetch-exec-upload-d9qujoev; copying to chroot
I: NOTICE: Log filtering will replace 'build/python-hmmlearn-5GsuRs/python-hmmlearn-0.3.0' with '<<PKGBUILDDIR>>'
I: NOTICE: Log filtering will replace 'build/python-hmmlearn-5GsuRs' with '<<BUILDDIR>>'
+------------------------------------------------------------------------------+
| Install package build dependencies                                           |
+------------------------------------------------------------------------------+
Setup apt archive
-----------------
Merged Build-Depends: debhelper-compat (= 13), dh-sequence-python3, pybuild-plugin-pyproject, python3-setuptools, python3-setuptools-scm, python3-all-dev, python3-pybind11, python3-pytest, python3-numpy, python3-sklearn, build-essential, fakeroot, dumb-init
Filtered Build-Depends: debhelper-compat (= 13), dh-sequence-python3, pybuild-plugin-pyproject, python3-setuptools, python3-setuptools-scm, python3-all-dev, python3-pybind11, python3-pytest, python3-numpy, python3-sklearn, build-essential, fakeroot, dumb-init
dpkg-deb: building package 'sbuild-build-depends-main-dummy' in '/<<RESOLVERDIR>>/apt_archive/sbuild-build-depends-main-dummy.deb'.
Ign:1 copy:/<<RESOLVERDIR>>/apt_archive ./ InRelease
Get:2 copy:/<<RESOLVERDIR>>/apt_archive ./ Release [609 B]
Ign:3 copy:/<<RESOLVERDIR>>/apt_archive ./ Release.gpg
Get:4 copy:/<<RESOLVERDIR>>/apt_archive ./ Sources [842 B]
Get:5 copy:/<<RESOLVERDIR>>/apt_archive ./ Packages [834 B]
Fetched 2285 B in 0s (216 kB/s)
Reading package lists...
Get:1 file:/<<BUILDDIR>>/resolver-HzbGeX/apt_archive ./ InRelease
Ign:1 file:/<<BUILDDIR>>/resolver-HzbGeX/apt_archive ./ InRelease
Get:2 file:/<<BUILDDIR>>/resolver-HzbGeX/apt_archive ./ Release [609 B]
Get:2 file:/<<BUILDDIR>>/resolver-HzbGeX/apt_archive ./ Release [609 B]
Get:3 file:/<<BUILDDIR>>/resolver-HzbGeX/apt_archive ./ Release.gpg
Ign:3 file:/<<BUILDDIR>>/resolver-HzbGeX/apt_archive ./ Release.gpg
Reading package lists...
Reading package lists...
Install main build dependencies (apt-based resolver)
----------------------------------------------------
Installing build dependencies
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed:
autoconf automake autopoint autotools-dev build-essential cpp cpp-14
cpp-14-aarch64-linux-gnu cpp-aarch64-linux-gnu debhelper dh-autoreconf
dh-python dh-strip-nondeterminism dumb-init dwz fakeroot g++ g++-14
g++-14-aarch64-linux-gnu g++-aarch64-linux-gnu gcc gcc-14
gcc-14-aarch64-linux-gnu gcc-aarch64-linux-gnu libasan8 libblas3
libc-dev-bin libc6-dev libcc1-0 libcrypt-dev libdebhelper-perl libelf1t64
libexpat1 libexpat1-dev libfakeroot libfile-stripnondeterminism-perl
libgcc-14-dev libgfortran5 libhwasan0 libisl23 libitm1 libjs-jquery
libjs-sphinxdoc libjs-underscore liblapack3 liblbfgsb0 liblsan0 libmpc3
libmpfr6 libncursesw6 libnsl2 libproc2-0 libpython3-all-dev libpython3-dev
libpython3-stdlib libpython3.12-dev libpython3.12-minimal
libpython3.12-stdlib libpython3.12t64 libpython3.13 libpython3.13-dev
libpython3.13-minimal libpython3.13-stdlib libstdc++-14-dev libtirpc-common
libtirpc3t64 libtool libtsan2 libubsan1 linux-libc-dev m4 media-types
po-debconf procps pybind11-dev pybuild-plugin-pyproject python3 python3-all
python3-all-dev python3-autocommand python3-build python3-decorator
python3-dev python3-inflect python3-iniconfig python3-installer
python3-jaraco.context python3-jaraco.functools python3-joblib
python3-minimal python3-more-itertools python3-numpy python3-packaging
python3-pkg-resources python3-pluggy python3-pybind11
python3-pyproject-hooks python3-pytest python3-scipy python3-setuptools
python3-setuptools-scm python3-sklearn python3-sklearn-lib
python3-threadpoolctl python3-toml python3-typeguard
python3-typing-extensions python3-wheel python3-zipp python3.12
python3.12-dev python3.12-minimal python3.13 python3.13-dev
python3.13-minimal rpcsvc-proto zlib1g-dev
Suggested packages:
autoconf-archive gnu-standards autoconf-doc cpp-doc gcc-14-locales
cpp-14-doc dh-make flit gcc-14-doc gcc-multilib manpages-dev flex bison gdb
gcc-doc gdb-aarch64-linux-gnu libc-devtools glibc-doc libstdc++-14-doc
libtool-doc gfortran | fortran95-compiler gcj-jdk m4-doc libmail-box-perl
pybind11-doc python3-doc python3-tk python3-venv python3-pip
python-build-doc python-installer-doc python-joblib-doc gfortran
python-numpy-doc python-scipy-doc python-setuptools-doc python3-dap
python-sklearn-doc ipython3 python3.12-venv python3.12-doc binfmt-support
python3.13-venv python3.13-doc
Recommended packages:
manpages manpages-dev libarchive-cpio-perl javascript-common libgpm2
libltdl-dev libmail-sendmail-perl psmisc linux-sysctl-defaults libeigen3-dev
python3-simplejson python3-psutil python3-pygments python3-pil
python3-matplotlib
The following NEW packages will be installed:
autoconf automake autopoint autotools-dev build-essential cpp cpp-14
cpp-14-aarch64-linux-gnu cpp-aarch64-linux-gnu debhelper dh-autoreconf
dh-python dh-strip-nondeterminism dumb-init dwz fakeroot g++ g++-14
g++-14-aarch64-linux-gnu g++-aarch64-linux-gnu gcc gcc-14
gcc-14-aarch64-linux-gnu gcc-aarch64-linux-gnu libasan8 libblas3
libc-dev-bin libc6-dev libcc1-0 libcrypt-dev libdebhelper-perl libelf1t64
libexpat1 libexpat1-dev libfakeroot libfile-stripnondeterminism-perl
libgcc-14-dev libgfortran5 libhwasan0 libisl23 libitm1 libjs-jquery
libjs-sphinxdoc libjs-underscore liblapack3 liblbfgsb0 liblsan0 libmpc3
libmpfr6 libncursesw6 libnsl2 libproc2-0 libpython3-all-dev libpython3-dev
libpython3-stdlib libpython3.12-dev libpython3.12-minimal
libpython3.12-stdlib libpython3.12t64 libpython3.13 libpython3.13-dev
libpython3.13-minimal libpython3.13-stdlib libstdc++-14-dev libtirpc-common
libtirpc3t64 libtool libtsan2 libubsan1 linux-libc-dev m4 media-types
po-debconf procps pybind11-dev pybuild-plugin-pyproject python3 python3-all
python3-all-dev python3-autocommand python3-build python3-decorator
python3-dev python3-inflect python3-iniconfig python3-installer
python3-jaraco.context python3-jaraco.functools python3-joblib
python3-minimal python3-more-itertools python3-numpy python3-packaging
python3-pkg-resources python3-pluggy python3-pybind11
python3-pyproject-hooks python3-pytest python3-scipy python3-setuptools
python3-setuptools-scm python3-sklearn python3-sklearn-lib
python3-threadpoolctl python3-toml python3-typeguard
python3-typing-extensions python3-wheel python3-zipp python3.12
python3.12-dev python3.12-minimal python3.13 python3.13-dev
python3.13-minimal rpcsvc-proto sbuild-build-depends-main-dummy zlib1g-dev
0 upgraded, 118 newly installed, 0 to remove and 0 not upgraded.
Need to get 95.9 MB/123 MB of archives.
After this operation, 577 MB of additional disk space will be used.
Get:1 copy:/<<RESOLVERDIR>>/apt_archive ./ sbuild-build-depends-main-dummy 0.invalid.0 [968 B]
Get:2 file:/<<BUILDDIR>>/resolver-HzbGeX/apt_archive ./ python3-minimal 3.12.6-1+debusine1 [26.8 kB]
Get:3 file:/<<BUILDDIR>>/resolver-HzbGeX/apt_archive ./ libpython3-stdlib 3.12.6-1+debusine1 [9732 B]
Get:4 file:/<<BUILDDIR>>/resolver-HzbGeX/apt_archive ./ python3 3.12.6-1+debusine1 [27.8 kB]
Get:5 file:/<<BUILDDIR>>/resolver-HzbGeX/apt_archive ./ libpython3-dev 3.12.6-1+debusine1 [9992 B]
Get:6 file:/<<BUILDDIR>>/resolver-HzbGeX/apt_archive ./ libpython3-all-dev 3.12.6-1+debusine1 [1084 B]
Get:7 file:/<<BUILDDIR>>/resolver-HzbGeX/apt_archive ./ python3-all 3.12.6-1+debusine1 [1056 B]
Get:8 file:/<<BUILDDIR>>/resolver-HzbGeX/apt_archive ./ python3-dev 3.12.6-1+debusine1 [26.1 kB]
Get:9 file:/<<BUILDDIR>>/resolver-HzbGeX/apt_archive ./ python3-all-dev 3.12.6-1+debusine1 [1084 B]
Get:10 http://deb.debian.org/debian sid/main arm64 libpython3.12-minimal arm64 3.12.7-1 [807 kB]
Get:11 file:/<<BUILDDIR>>/resolver-HzbGeX/apt_archive ./ python3-numpy 1:1.26.4+ds-11+bootstrap1 [3815 kB]
Get:12 http://deb.debian.org/debian sid/main arm64 libexpat1 arm64 2.6.3-1 [90.2 kB]
Get:13 http://deb.debian.org/debian sid/main arm64 python3.12-minimal arm64 3.12.7-1 [1941 kB]
Get:14 http://deb.debian.org/debian sid/main arm64 media-types all 10.1.0 [26.9 kB]
Get:15 http://deb.debian.org/debian sid/main arm64 libncursesw6 arm64 6.5-2 [124 kB]
Get:16 http://deb.debian.org/debian sid/main arm64 libtirpc-common all 1.3.4+ds-1.3 [10.9 kB]
Get:17 http://deb.debian.org/debian sid/main arm64 libtirpc3t64 arm64 1.3.4+ds-1.3 [78.4 kB]
Get:18 http://deb.debian.org/debian sid/main arm64 libnsl2 arm64 1.3.0-3+b2 [37.7 kB]
Get:19 http://deb.debian.org/debian sid/main arm64 libpython3.12-stdlib arm64 3.12.7-1 [1901 kB]
Get:20 http://deb.debian.org/debian sid/main arm64 python3.12 arm64 3.12.7-1 [671 kB]
Get:21 http://deb.debian.org/debian sid/main arm64 libpython3.13-minimal arm64 3.13.0~rc3-1 [849 kB]
Get:22 http://deb.debian.org/debian sid/main arm64 python3.13-minimal arm64 3.13.0~rc3-1 [1835 kB]
Get:23 http://deb.debian.org/debian sid/main arm64 libproc2-0 arm64 2:4.0.4-6 [62.3 kB]
Get:24 http://deb.debian.org/debian sid/main arm64 procps arm64 2:4.0.4-6 [872 kB]
Get:25 http://deb.debian.org/debian sid/main arm64 m4 arm64 1.4.19-4 [277 kB]
Get:26 http://deb.debian.org/debian sid/main arm64 autoconf all 2.72-3 [493 kB]
Get:27 http://deb.debian.org/debian sid/main arm64 autotools-dev all 20220109.1 [51.6 kB]
Get:28 http://deb.debian.org/debian sid/main arm64 automake all 1:1.16.5-1.3 [823 kB]
Get:29 file:/<<BUILDDIR>>/resolver-HzbGeX/apt_archive ./ python3-scipy 1.13.1-5+nocheck1 [18.1 MB]
Get:30 http://deb.debian.org/debian sid/main arm64 autopoint all 0.22.5-2 [723 kB]
Get:31 http://deb.debian.org/debian sid/main arm64 libc-dev-bin arm64 2.40-3 [50.9 kB]
Get:32 http://deb.debian.org/debian sid/main arm64 linux-libc-dev all 6.10.12-1 [2400 kB]
Get:33 http://deb.debian.org/debian sid/main arm64 libcrypt-dev arm64 1:4.4.36-5 [122 kB]
Get:34 file:/<<BUILDDIR>>/resolver-HzbGeX/apt_archive ./ python3-sklearn-lib 1.4.2+dfsg-6+debusine1 [5386 kB]
Get:35 http://deb.debian.org/debian sid/main arm64 rpcsvc-proto arm64 1.4.3-1 [59.7 kB]
Get:36 http://deb.debian.org/debian sid/main arm64 libc6-dev arm64 2.40-3 [1591 kB]
Get:37 http://deb.debian.org/debian sid/main arm64 libisl23 arm64 0.27-1 [601 kB]
Get:38 http://deb.debian.org/debian sid/main arm64 libmpfr6 arm64 4.2.1-1+b1 [674 kB]
Get:39 http://deb.debian.org/debian sid/main arm64 libmpc3 arm64 1.3.1-1+b2 [50.2 kB]
Get:40 http://deb.debian.org/debian sid/main arm64 cpp-14-aarch64-linux-gnu arm64 14.2.0-5 [9161 kB]
Get:41 http://deb.debian.org/debian sid/main arm64 cpp-14 arm64 14.2.0-5 [1280 B]
Get:42 http://deb.debian.org/debian sid/main arm64 cpp-aarch64-linux-gnu arm64 4:14.1.0-2 [4792 B]
Get:43 http://deb.debian.org/debian sid/main arm64 cpp arm64 4:14.1.0-2 [1572 B]
Get:44 http://deb.debian.org/debian sid/main arm64 libcc1-0 arm64 14.2.0-5 [42.0 kB]
Get:45 http://deb.debian.org/debian sid/main arm64 libitm1 arm64 14.2.0-5 [24.2 kB]
Get:46 http://deb.debian.org/debian sid/main arm64 libasan8 arm64 14.2.0-5 [2578 kB]
Get:47 http://deb.debian.org/debian sid/main arm64 liblsan0 arm64 14.2.0-5 [1162 kB]
Get:48 http://deb.debian.org/debian sid/main arm64 libtsan2 arm64 14.2.0-5 [2385 kB]
Get:49 http://deb.debian.org/debian sid/main arm64 libubsan1 arm64 14.2.0-5 [1040 kB]
Get:50 http://deb.debian.org/debian sid/main arm64 libhwasan0 arm64 14.2.0-5 [1442 kB]
Get:51 http://deb.debian.org/debian sid/main arm64 libgcc-14-dev arm64 14.2.0-5 [2363 kB]
Get:52 http://deb.debian.org/debian sid/main arm64 gcc-14-aarch64-linux-gnu arm64 14.2.0-5 [17.7 MB]
Get:53 http://deb.debian.org/debian sid/main arm64 gcc-14 arm64 14.2.0-5 [513 kB]
Get:54 http://deb.debian.org/debian sid/main arm64 gcc-aarch64-linux-gnu arm64 4:14.1.0-2 [1440 B]
Get:55 http://deb.debian.org/debian sid/main arm64 gcc arm64 4:14.1.0-2 [5136 B]
Get:56 http://deb.debian.org/debian sid/main arm64 libstdc++-14-dev arm64 14.2.0-5 [2263 kB]
Get:57 http://deb.debian.org/debian sid/main arm64 g++-14-aarch64-linux-gnu arm64 14.2.0-5 [10.1 MB]
Get:58 http://deb.debian.org/debian sid/main arm64 g++-14 arm64 14.2.0-5 [19.7 kB]
Get:59 http://deb.debian.org/debian sid/main arm64 g++-aarch64-linux-gnu arm64 4:14.1.0-2 [1200 B]
Get:60 http://deb.debian.org/debian sid/main arm64 g++ arm64 4:14.1.0-2 [1328 B]
Get:61 http://deb.debian.org/debian sid/main arm64 build-essential arm64 12.10 [4516 B]
Get:62 http://deb.debian.org/debian sid/main arm64 libdebhelper-perl all 13.20 [89.7 kB]
Get:63 http://deb.debian.org/debian sid/main arm64 libtool all 2.4.7-7 [517 kB]
Get:64 http://deb.debian.org/debian sid/main arm64 dh-autoreconf all 20 [17.1 kB]
Get:65 http://deb.debian.org/debian sid/main arm64 libfile-stripnondeterminism-perl all 1.14.0-1 [19.5 kB]
Get:66 http://deb.debian.org/debian sid/main arm64 dh-strip-nondeterminism all 1.14.0-1 [8448 B]
Get:67 http://deb.debian.org/debian sid/main arm64 libelf1t64 arm64 0.191-2 [188 kB]
Get:68 http://deb.debian.org/debian sid/main arm64 dwz arm64 0.15-1+b1 [102 kB]
Get:69 http://deb.debian.org/debian sid/main arm64 po-debconf all 1.0.21+nmu1 [248 kB]
Get:70 http://deb.debian.org/debian sid/main arm64 debhelper all 13.20 [915 kB]
Get:71 http://deb.debian.org/debian sid/main arm64 python3-autocommand all 2.2.2-3 [13.6 kB]
Get:72 http://deb.debian.org/debian sid/main arm64 python3-more-itertools all 10.5.0-1 [63.8 kB]
Get:73 http://deb.debian.org/debian sid/main arm64 python3-typing-extensions all 4.12.2-2 [73.0 kB]
Get:74 http://deb.debian.org/debian sid/main arm64 python3-typeguard all 4.3.0-1 [36.5 kB]
Get:75 http://deb.debian.org/debian sid/main arm64 python3-inflect all 7.3.1-2 [32.4 kB]
Get:76 http://deb.debian.org/debian sid/main arm64 python3-jaraco.context all 6.0.0-1 [7984 B]
Get:77 http://deb.debian.org/debian sid/main arm64 python3-jaraco.functools all 4.1.0-1 [12.0 kB]
Get:78 http://deb.debian.org/debian sid/main arm64 python3-pkg-resources all 74.1.2-2 [213 kB]
Get:79 http://deb.debian.org/debian sid/main arm64 python3-zipp all 3.20.2-1 [10.3 kB]
Get:80 http://deb.debian.org/debian sid/main arm64 python3-setuptools all 74.1.2-2 [736 kB]
Get:81 http://deb.debian.org/debian sid/main arm64 dh-python all 6.20240824 [109 kB]
Get:82 http://deb.debian.org/debian sid/main arm64 dumb-init arm64 1.2.5-3 [13.4 kB]
Get:83 http://deb.debian.org/debian sid/main arm64 libfakeroot arm64 1.36-1 [29.1 kB]
Get:84 http://deb.debian.org/debian sid/main arm64 fakeroot arm64 1.36-1 [74.4 kB]
Get:85 http://deb.debian.org/debian sid/main arm64 libblas3 arm64 3.12.0-3 [91.7 kB]
Get:86 http://deb.debian.org/debian sid/main arm64 libexpat1-dev arm64 2.6.3-1 [142 kB]
Get:87 http://deb.debian.org/debian sid/main arm64 libgfortran5 arm64 14.2.0-5 [361 kB]
Get:88 http://deb.debian.org/debian sid/main arm64 libjs-jquery all 3.6.1+dfsg+~3.5.14-1 [326 kB]
Get:89 http://deb.debian.org/debian sid/main arm64 libjs-underscore all 1.13.4~dfsg+~1.11.4-3 [116 kB]
Get:90 http://deb.debian.org/debian sid/main arm64 libjs-sphinxdoc all 7.4.7-3 [158 kB]
Get:91 http://deb.debian.org/debian sid/main arm64 liblapack3 arm64 3.12.0-3 [1757 kB]
Get:92 http://deb.debian.org/debian sid/main arm64 liblbfgsb0 arm64 3.0+dfsg.4-1+b1 [25.0 kB]
Get:93 http://deb.debian.org/debian sid/main arm64 libpython3.12t64 arm64 3.12.7-1 [1982 kB]
Get:94 http://deb.debian.org/debian sid/main arm64 zlib1g-dev arm64 1:1.3.dfsg+really1.3.1-1 [916 kB]
Get:95 http://deb.debian.org/debian sid/main arm64 libpython3.12-dev arm64 3.12.7-1 [4793 kB]
Get:96 http://deb.debian.org/debian sid/main arm64 libpython3.13-stdlib arm64 3.13.0~rc3-1 [1920 kB]
Get:97 http://deb.debian.org/debian sid/main arm64 libpython3.13 arm64 3.13.0~rc3-1 [1959 kB]
Get:98 http://deb.debian.org/debian sid/main arm64 libpython3.13-dev arm64 3.13.0~rc3-1 [4668 kB]
Get:99 http://deb.debian.org/debian sid/main arm64 pybind11-dev all 2.13.6-1 [204 kB]
Get:100 http://deb.debian.org/debian sid/main arm64 python3-packaging all 24.1-1 [45.8 kB]
Get:101 http://deb.debian.org/debian sid/main arm64 python3-pyproject-hooks all 1.1.0-2 [11.3 kB]
Get:102 http://deb.debian.org/debian sid/main arm64 python3-toml all 0.10.2-1 [16.2 kB]
Get:103 http://deb.debian.org/debian sid/main arm64 python3-wheel all 0.44.0-2 [53.4 kB]
Get:104 http://deb.debian.org/debian sid/main arm64 python3-build all 1.2.2-1 [36.0 kB]
Get:105 http://deb.debian.org/debian sid/main arm64 python3-installer all 0.7.0+dfsg1-3 [18.6 kB]
Get:106 http://deb.debian.org/debian sid/main arm64 pybuild-plugin-pyproject all 6.20240824 [11.2 kB]
Get:107 http://deb.debian.org/debian sid/main arm64 python3.13 arm64 3.13.0~rc3-1 [730 kB]
Get:108 http://deb.debian.org/debian sid/main arm64 python3.12-dev arm64 3.12.7-1 [505 kB]
Get:109 http://deb.debian.org/debian sid/main arm64 python3.13-dev arm64 3.13.0~rc3-1 [504 kB]
Get:110 http://deb.debian.org/debian sid/main arm64 python3-decorator all 5.1.1-5 [15.1 kB]
Get:111 http://deb.debian.org/debian sid/main arm64 python3-iniconfig all 1.1.1-2 [6396 B]
Get:112 http://deb.debian.org/debian sid/main arm64 python3-joblib all 1.3.2-3 [216 kB]
Get:113 http://deb.debian.org/debian sid/main arm64 python3-pluggy all 1.5.0-1 [26.9 kB]
Get:114 http://deb.debian.org/debian sid/main arm64 python3-pybind11 all 2.13.6-1 [215 kB]
Get:115 http://deb.debian.org/debian sid/main arm64 python3-pytest all 8.3.3-1 [249 kB]
Get:116 http://deb.debian.org/debian sid/main arm64 python3-setuptools-scm all 8.1.0-1 [40.5 kB]
Get:117 http://deb.debian.org/debian sid/main arm64 python3-threadpoolctl all 3.1.0-1 [21.2 kB]
Get:118 http://deb.debian.org/debian sid/main arm64 python3-sklearn all 1.4.2+dfsg-6 [2248 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 95.9 MB in 1s (99.1 MB/s)
Selecting previously unselected package libpython3.12-minimal:arm64.
(Reading database ... 16920 files and directories currently installed.)
Preparing to unpack .../libpython3.12-minimal_3.12.7-1_arm64.deb ...
Unpacking libpython3.12-minimal:arm64 (3.12.7-1) ...
Selecting previously unselected package libexpat1:arm64.
Preparing to unpack .../libexpat1_2.6.3-1_arm64.deb ...
Unpacking libexpat1:arm64 (2.6.3-1) ...
Selecting previously unselected package python3.12-minimal.
Preparing to unpack .../python3.12-minimal_3.12.7-1_arm64.deb ...
Unpacking python3.12-minimal (3.12.7-1) ...
Setting up libpython3.12-minimal:arm64 (3.12.7-1) ...
Setting up libexpat1:arm64 (2.6.3-1) ...
Setting up python3.12-minimal (3.12.7-1) ...
Selecting previously unselected package python3-minimal.
(Reading database ... 17240 files and directories currently installed.)
Preparing to unpack .../0-python3-minimal_3.12.6-1+debusine1_arm64.deb ...
Unpacking python3-minimal (3.12.6-1+debusine1) ...
Selecting previously unselected package media-types.
Preparing to unpack .../1-media-types_10.1.0_all.deb ...
Unpacking media-types (10.1.0) ...
Selecting previously unselected package libncursesw6:arm64.
Preparing to unpack .../2-libncursesw6_6.5-2_arm64.deb ...
Unpacking libncursesw6:arm64 (6.5-2) ...
Selecting previously unselected package libtirpc-common.
Preparing to unpack .../3-libtirpc-common_1.3.4+ds-1.3_all.deb ...
Unpacking libtirpc-common (1.3.4+ds-1.3) ...
Selecting previously unselected package libtirpc3t64:arm64.
Preparing to unpack .../4-libtirpc3t64_1.3.4+ds-1.3_arm64.deb ...
Adding 'diversion of /lib/aarch64-linux-gnu/libtirpc.so.3 to /lib/aarch64-linux-gnu/libtirpc.so.3.usr-is-merged by libtirpc3t64'
Adding 'diversion of /lib/aarch64-linux-gnu/libtirpc.so.3.0.0 to /lib/aarch64-linux-gnu/libtirpc.so.3.0.0.usr-is-merged by libtirpc3t64'
Unpacking libtirpc3t64:arm64 (1.3.4+ds-1.3) ...
Selecting previously unselected package libnsl2:arm64.
Preparing to unpack .../5-libnsl2_1.3.0-3+b2_arm64.deb ...
Unpacking libnsl2:arm64 (1.3.0-3+b2) ...
Selecting previously unselected package libpython3.12-stdlib:arm64.
Preparing to unpack .../6-libpython3.12-stdlib_3.12.7-1_arm64.deb ...
Unpacking libpython3.12-stdlib:arm64 (3.12.7-1) ...
Selecting previously unselected package python3.12.
Preparing to unpack .../7-python3.12_3.12.7-1_arm64.deb ...
Unpacking python3.12 (3.12.7-1) ...
Selecting previously unselected package libpython3-stdlib:arm64.
Preparing to unpack .../8-libpython3-stdlib_3.12.6-1+debusine1_arm64.deb ...
Unpacking libpython3-stdlib:arm64 (3.12.6-1+debusine1) ...
Setting up python3-minimal (3.12.6-1+debusine1) ...
Selecting previously unselected package python3.
(Reading database ... 17712 files and directories currently installed.)
Preparing to unpack .../000-python3_3.12.6-1+debusine1_arm64.deb ...
Unpacking python3 (3.12.6-1+debusine1) ...
Selecting previously unselected package libpython3.13-minimal:arm64.
Preparing to unpack .../001-libpython3.13-minimal_3.13.0~rc3-1_arm64.deb ...
Unpacking libpython3.13-minimal:arm64 (3.13.0~rc3-1) ...
Selecting previously unselected package python3.13-minimal.
Preparing to unpack .../002-python3.13-minimal_3.13.0~rc3-1_arm64.deb ...
Unpacking python3.13-minimal (3.13.0~rc3-1) ...
Selecting previously unselected package libproc2-0:arm64.
Preparing to unpack .../003-libproc2-0_2%3a4.0.4-6_arm64.deb ...
Unpacking libproc2-0:arm64 (2:4.0.4-6) ...
Selecting previously unselected package procps.
Preparing to unpack .../004-procps_2%3a4.0.4-6_arm64.deb ...
Unpacking procps (2:4.0.4-6) ...
Selecting previously unselected package m4.
Preparing to unpack .../005-m4_1.4.19-4_arm64.deb ...
Unpacking m4 (1.4.19-4) ...
Selecting previously unselected package autoconf.
Preparing to unpack .../006-autoconf_2.72-3_all.deb ...
Unpacking autoconf (2.72-3) ...
Selecting previously unselected package autotools-dev.
Preparing to unpack .../007-autotools-dev_20220109.1_all.deb ...
Unpacking autotools-dev (20220109.1) ...
Selecting previously unselected package automake.
Preparing to unpack .../008-automake_1%3a1.16.5-1.3_all.deb ...
Unpacking automake (1:1.16.5-1.3) ...
Selecting previously unselected package autopoint.
Preparing to unpack .../009-autopoint_0.22.5-2_all.deb ...
Unpacking autopoint (0.22.5-2) ...
Selecting previously unselected package libc-dev-bin.
Preparing to unpack .../010-libc-dev-bin_2.40-3_arm64.deb ...
Unpacking libc-dev-bin (2.40-3) ...
Selecting previously unselected package linux-libc-dev.
Preparing to unpack .../011-linux-libc-dev_6.10.12-1_all.deb ...
Unpacking linux-libc-dev (6.10.12-1) ...
Selecting previously unselected package libcrypt-dev:arm64.
Preparing to unpack .../012-libcrypt-dev_1%3a4.4.36-5_arm64.deb ...
Unpacking libcrypt-dev:arm64 (1:4.4.36-5) ...
Selecting previously unselected package rpcsvc-proto.
Preparing to unpack .../013-rpcsvc-proto_1.4.3-1_arm64.deb ...
Unpacking rpcsvc-proto (1.4.3-1) ...
Selecting previously unselected package libc6-dev:arm64.
Preparing to unpack .../014-libc6-dev_2.40-3_arm64.deb ...
Unpacking libc6-dev:arm64 (2.40-3) ...
Selecting previously unselected package libisl23:arm64.
Preparing to unpack .../015-libisl23_0.27-1_arm64.deb ...
Unpacking libisl23:arm64 (0.27-1) ...
Selecting previously unselected package libmpfr6:arm64.
Preparing to unpack .../016-libmpfr6_4.2.1-1+b1_arm64.deb ...
Unpacking libmpfr6:arm64 (4.2.1-1+b1) ...
Selecting previously unselected package libmpc3:arm64.
Preparing to unpack .../017-libmpc3_1.3.1-1+b2_arm64.deb ...
Unpacking libmpc3:arm64 (1.3.1-1+b2) ...
Selecting previously unselected package cpp-14-aarch64-linux-gnu.
Preparing to unpack .../018-cpp-14-aarch64-linux-gnu_14.2.0-5_arm64.deb ...
Unpacking cpp-14-aarch64-linux-gnu (14.2.0-5) ...
Selecting previously unselected package cpp-14.
Preparing to unpack .../019-cpp-14_14.2.0-5_arm64.deb ...
Unpacking cpp-14 (14.2.0-5) ...
Selecting previously unselected package cpp-aarch64-linux-gnu.
Preparing to unpack .../020-cpp-aarch64-linux-gnu_4%3a14.1.0-2_arm64.deb ...
Unpacking cpp-aarch64-linux-gnu (4:14.1.0-2) ...
Selecting previously unselected package cpp.
Preparing to unpack .../021-cpp_4%3a14.1.0-2_arm64.deb ...
Unpacking cpp (4:14.1.0-2) ...
Selecting previously unselected package libcc1-0:arm64.
Preparing to unpack .../022-libcc1-0_14.2.0-5_arm64.deb ...
Unpacking libcc1-0:arm64 (14.2.0-5) ...
Selecting previously unselected package libitm1:arm64.
Preparing to unpack .../023-libitm1_14.2.0-5_arm64.deb ...
Unpacking libitm1:arm64 (14.2.0-5) ...
Selecting previously unselected package libasan8:arm64.
Preparing to unpack .../024-libasan8_14.2.0-5_arm64.deb ...
Unpacking libasan8:arm64 (14.2.0-5) ...
Selecting previously unselected package liblsan0:arm64.
Preparing to unpack .../025-liblsan0_14.2.0-5_arm64.deb ...
Unpacking liblsan0:arm64 (14.2.0-5) ...
Selecting previously unselected package libtsan2:arm64.
Preparing to unpack .../026-libtsan2_14.2.0-5_arm64.deb ...
Unpacking libtsan2:arm64 (14.2.0-5) ...
Selecting previously unselected package libubsan1:arm64.
Preparing to unpack .../027-libubsan1_14.2.0-5_arm64.deb ...
Unpacking libubsan1:arm64 (14.2.0-5) ...
Selecting previously unselected package libhwasan0:arm64.
Preparing to unpack .../028-libhwasan0_14.2.0-5_arm64.deb ...
Unpacking libhwasan0:arm64 (14.2.0-5) ...
Selecting previously unselected package libgcc-14-dev:arm64.
Preparing to unpack .../029-libgcc-14-dev_14.2.0-5_arm64.deb ...
Unpacking libgcc-14-dev:arm64 (14.2.0-5) ...
Selecting previously unselected package gcc-14-aarch64-linux-gnu.
Preparing to unpack .../030-gcc-14-aarch64-linux-gnu_14.2.0-5_arm64.deb ...
Unpacking gcc-14-aarch64-linux-gnu (14.2.0-5) ...
Selecting previously unselected package gcc-14.
Preparing to unpack .../031-gcc-14_14.2.0-5_arm64.deb ...
Unpacking gcc-14 (14.2.0-5) ...
Selecting previously unselected package gcc-aarch64-linux-gnu.
Preparing to unpack .../032-gcc-aarch64-linux-gnu_4%3a14.1.0-2_arm64.deb ...
Unpacking gcc-aarch64-linux-gnu (4:14.1.0-2) ...
Selecting previously unselected package gcc.
Preparing to unpack .../033-gcc_4%3a14.1.0-2_arm64.deb ...
Unpacking gcc (4:14.1.0-2) ...
Selecting previously unselected package libstdc++-14-dev:arm64.
Preparing to unpack .../034-libstdc++-14-dev_14.2.0-5_arm64.deb ...
Unpacking libstdc++-14-dev:arm64 (14.2.0-5) ...
Selecting previously unselected package g++-14-aarch64-linux-gnu.
Preparing to unpack .../035-g++-14-aarch64-linux-gnu_14.2.0-5_arm64.deb ...
Unpacking g++-14-aarch64-linux-gnu (14.2.0-5) ...
Selecting previously unselected package g++-14.
Preparing to unpack .../036-g++-14_14.2.0-5_arm64.deb ...
Unpacking g++-14 (14.2.0-5) ...
Selecting previously unselected package g++-aarch64-linux-gnu.
Preparing to unpack .../037-g++-aarch64-linux-gnu_4%3a14.1.0-2_arm64.deb ...
Unpacking g++-aarch64-linux-gnu (4:14.1.0-2) ...
Selecting previously unselected package g++.
Preparing to unpack .../038-g++_4%3a14.1.0-2_arm64.deb ...
Unpacking g++ (4:14.1.0-2) ...
Selecting previously unselected package build-essential.
Preparing to unpack .../039-build-essential_12.10_arm64.deb ...
Unpacking build-essential (12.10) ...
Selecting previously unselected package libdebhelper-perl.
Preparing to unpack .../040-libdebhelper-perl_13.20_all.deb ...
Unpacking libdebhelper-perl (13.20) ...
Selecting previously unselected package libtool.
Preparing to unpack .../041-libtool_2.4.7-7_all.deb ...
Unpacking libtool (2.4.7-7) ...
Selecting previously unselected package dh-autoreconf.
Preparing to unpack .../042-dh-autoreconf_20_all.deb ...
Unpacking dh-autoreconf (20) ...
Selecting previously unselected package libfile-stripnondeterminism-perl.
Preparing to unpack .../043-libfile-stripnondeterminism-perl_1.14.0-1_all.deb ...
Unpacking libfile-stripnondeterminism-perl (1.14.0-1) ...
Selecting previously unselected package dh-strip-nondeterminism.
Preparing to unpack .../044-dh-strip-nondeterminism_1.14.0-1_all.deb ...
Unpacking dh-strip-nondeterminism (1.14.0-1) ...
Selecting previously unselected package libelf1t64:arm64.
Preparing to unpack .../045-libelf1t64_0.191-2_arm64.deb ...
Unpacking libelf1t64:arm64 (0.191-2) ...
Selecting previously unselected package dwz.
Preparing to unpack .../046-dwz_0.15-1+b1_arm64.deb ...
Unpacking dwz (0.15-1+b1) ...
Selecting previously unselected package po-debconf.
Preparing to unpack .../047-po-debconf_1.0.21+nmu1_all.deb ...
Unpacking po-debconf (1.0.21+nmu1) ...
Selecting previously unselected package debhelper.
Preparing to unpack .../048-debhelper_13.20_all.deb ...
Unpacking debhelper (13.20) ...
Selecting previously unselected package python3-autocommand.
Preparing to unpack .../049-python3-autocommand_2.2.2-3_all.deb ...
Unpacking python3-autocommand (2.2.2-3) ...
Selecting previously unselected package python3-more-itertools.
Preparing to unpack .../050-python3-more-itertools_10.5.0-1_all.deb ...
Unpacking python3-more-itertools (10.5.0-1) ...
Selecting previously unselected package python3-typing-extensions.
Preparing to unpack .../051-python3-typing-extensions_4.12.2-2_all.deb ...
Unpacking python3-typing-extensions (4.12.2-2) ...
Selecting previously unselected package python3-typeguard.
Preparing to unpack .../052-python3-typeguard_4.3.0-1_all.deb ...
Unpacking python3-typeguard (4.3.0-1) ...
Selecting previously unselected package python3-inflect.
Preparing to unpack .../053-python3-inflect_7.3.1-2_all.deb ...
Unpacking python3-inflect (7.3.1-2) ...
Selecting previously unselected package python3-jaraco.context.
Preparing to unpack .../054-python3-jaraco.context_6.0.0-1_all.deb ...
Unpacking python3-jaraco.context (6.0.0-1) ...
Selecting previously unselected package python3-jaraco.functools.
Preparing to unpack .../055-python3-jaraco.functools_4.1.0-1_all.deb ...
Unpacking python3-jaraco.functools (4.1.0-1) ...
Selecting previously unselected package python3-pkg-resources.
Preparing to unpack .../056-python3-pkg-resources_74.1.2-2_all.deb ...
Unpacking python3-pkg-resources (74.1.2-2) ...
Selecting previously unselected package python3-zipp.
Preparing to unpack .../057-python3-zipp_3.20.2-1_all.deb ...
Unpacking python3-zipp (3.20.2-1) ...
Selecting previously unselected package python3-setuptools.
Preparing to unpack .../058-python3-setuptools_74.1.2-2_all.deb ...
Unpacking python3-setuptools (74.1.2-2) ...
Selecting previously unselected package dh-python.
Preparing to unpack .../059-dh-python_6.20240824_all.deb ...
Unpacking dh-python (6.20240824) ...
Selecting previously unselected package dumb-init.
Preparing to unpack .../060-dumb-init_1.2.5-3_arm64.deb ...
Unpacking dumb-init (1.2.5-3) ...
Selecting previously unselected package libfakeroot:arm64.
Preparing to unpack .../061-libfakeroot_1.36-1_arm64.deb ...
Unpacking libfakeroot:arm64 (1.36-1) ...
Selecting previously unselected package fakeroot.
Preparing to unpack .../062-fakeroot_1.36-1_arm64.deb ...
Unpacking fakeroot (1.36-1) ...
Selecting previously unselected package libblas3:arm64.
Preparing to unpack .../063-libblas3_3.12.0-3_arm64.deb ...
Unpacking libblas3:arm64 (3.12.0-3) ...
Selecting previously unselected package libexpat1-dev:arm64.
Preparing to unpack .../064-libexpat1-dev_2.6.3-1_arm64.deb ...
Unpacking libexpat1-dev:arm64 (2.6.3-1) ...
Selecting previously unselected package libgfortran5:arm64.
Preparing to unpack .../065-libgfortran5_14.2.0-5_arm64.deb ...
Unpacking libgfortran5:arm64 (14.2.0-5) ...
Selecting previously unselected package libjs-jquery.
Preparing to unpack .../066-libjs-jquery_3.6.1+dfsg+~3.5.14-1_all.deb ...
Unpacking libjs-jquery (3.6.1+dfsg+~3.5.14-1) ...
Selecting previously unselected package libjs-underscore.
Preparing to unpack .../067-libjs-underscore_1.13.4~dfsg+~1.11.4-3_all.deb ...
Unpacking libjs-underscore (1.13.4~dfsg+~1.11.4-3) ...
Selecting previously unselected package libjs-sphinxdoc.
Preparing to unpack .../068-libjs-sphinxdoc_7.4.7-3_all.deb ...
Unpacking libjs-sphinxdoc (7.4.7-3) ...
Selecting previously unselected package liblapack3:arm64.
Preparing to unpack .../069-liblapack3_3.12.0-3_arm64.deb ...
Unpacking liblapack3:arm64 (3.12.0-3) ...
Selecting previously unselected package liblbfgsb0:arm64.
Preparing to unpack .../070-liblbfgsb0_3.0+dfsg.4-1+b1_arm64.deb ...
Unpacking liblbfgsb0:arm64 (3.0+dfsg.4-1+b1) ...
Selecting previously unselected package libpython3.12t64:arm64.
Preparing to unpack .../071-libpython3.12t64_3.12.7-1_arm64.deb ...
Unpacking libpython3.12t64:arm64 (3.12.7-1) ...
Selecting previously unselected package zlib1g-dev:arm64.
Preparing to unpack .../072-zlib1g-dev_1%3a1.3.dfsg+really1.3.1-1_arm64.deb ...
Unpacking zlib1g-dev:arm64 (1:1.3.dfsg+really1.3.1-1) ...
Selecting previously unselected package libpython3.12-dev:arm64.
Preparing to unpack .../073-libpython3.12-dev_3.12.7-1_arm64.deb ...
Unpacking libpython3.12-dev:arm64 (3.12.7-1) ...
Selecting previously unselected package libpython3-dev:arm64.
Preparing to unpack .../074-libpython3-dev_3.12.6-1+debusine1_arm64.deb ...
Unpacking libpython3-dev:arm64 (3.12.6-1+debusine1) ...
Selecting previously unselected package libpython3.13-stdlib:arm64.
Preparing to unpack .../075-libpython3.13-stdlib_3.13.0~rc3-1_arm64.deb ...
Unpacking libpython3.13-stdlib:arm64 (3.13.0~rc3-1) ...
Selecting previously unselected package libpython3.13:arm64.
Preparing to unpack .../076-libpython3.13_3.13.0~rc3-1_arm64.deb ...
Unpacking libpython3.13:arm64 (3.13.0~rc3-1) ...
Selecting previously unselected package libpython3.13-dev:arm64.
Preparing to unpack .../077-libpython3.13-dev_3.13.0~rc3-1_arm64.deb ...
Unpacking libpython3.13-dev:arm64 (3.13.0~rc3-1) ...
Selecting previously unselected package libpython3-all-dev:arm64.
Preparing to unpack .../078-libpython3-all-dev_3.12.6-1+debusine1_arm64.deb ...
Unpacking libpython3-all-dev:arm64 (3.12.6-1+debusine1) ...
Selecting previously unselected package pybind11-dev.
Preparing to unpack .../079-pybind11-dev_2.13.6-1_all.deb ...
Unpacking pybind11-dev (2.13.6-1) ...
Selecting previously unselected package python3-packaging.
Preparing to unpack .../080-python3-packaging_24.1-1_all.deb ...
Unpacking python3-packaging (24.1-1) ...
Selecting previously unselected package python3-pyproject-hooks.
Preparing to unpack .../081-python3-pyproject-hooks_1.1.0-2_all.deb ...
Unpacking python3-pyproject-hooks (1.1.0-2) ...
Selecting previously unselected package python3-toml.
Preparing to unpack .../082-python3-toml_0.10.2-1_all.deb ...
Unpacking python3-toml (0.10.2-1) ...
Selecting previously unselected package python3-wheel.
Preparing to unpack .../083-python3-wheel_0.44.0-2_all.deb ...
Unpacking python3-wheel (0.44.0-2) ...
Selecting previously unselected package python3-build.
Preparing to unpack .../084-python3-build_1.2.2-1_all.deb ...
Unpacking python3-build (1.2.2-1) ...
Selecting previously unselected package python3-installer.
Preparing to unpack .../085-python3-installer_0.7.0+dfsg1-3_all.deb ...
Unpacking python3-installer (0.7.0+dfsg1-3) ...
Selecting previously unselected package pybuild-plugin-pyproject.
Preparing to unpack .../086-pybuild-plugin-pyproject_6.20240824_all.deb ...
Unpacking pybuild-plugin-pyproject (6.20240824) ...
Selecting previously unselected package python3.13.
Preparing to unpack .../087-python3.13_3.13.0~rc3-1_arm64.deb ...
Unpacking python3.13 (3.13.0~rc3-1) ...
Selecting previously unselected package python3-all.
Preparing to unpack .../088-python3-all_3.12.6-1+debusine1_arm64.deb ...
Unpacking python3-all (3.12.6-1+debusine1) ...
Selecting previously unselected package python3.12-dev.
Preparing to unpack .../089-python3.12-dev_3.12.7-1_arm64.deb ...
Unpacking python3.12-dev (3.12.7-1) ...
Selecting previously unselected package python3-dev.
Preparing to unpack .../090-python3-dev_3.12.6-1+debusine1_arm64.deb ...
Unpacking python3-dev (3.12.6-1+debusine1) ...
Selecting previously unselected package python3.13-dev.
Preparing to unpack .../091-python3.13-dev_3.13.0~rc3-1_arm64.deb ...
Unpacking python3.13-dev (3.13.0~rc3-1) ...
Selecting previously unselected package python3-all-dev.
Preparing to unpack .../092-python3-all-dev_3.12.6-1+debusine1_arm64.deb ...
Unpacking python3-all-dev (3.12.6-1+debusine1) ...
Selecting previously unselected package python3-decorator.
Preparing to unpack .../093-python3-decorator_5.1.1-5_all.deb ...
Unpacking python3-decorator (5.1.1-5) ...
Selecting previously unselected package python3-iniconfig.
Preparing to unpack .../094-python3-iniconfig_1.1.1-2_all.deb ...
Unpacking python3-iniconfig (1.1.1-2) ...
Selecting previously unselected package python3-joblib.
Preparing to unpack .../095-python3-joblib_1.3.2-3_all.deb ...
Unpacking python3-joblib (1.3.2-3) ...
Selecting previously unselected package python3-numpy.
Preparing to unpack .../096-python3-numpy_1.26.4+ds-11+bootstrap1_arm64.deb ...
Unpacking python3-numpy (1:1.26.4+ds-11+bootstrap1) ...
Selecting previously unselected package python3-pluggy.
Preparing to unpack .../097-python3-pluggy_1.5.0-1_all.deb ...
Unpacking python3-pluggy (1.5.0-1) ...
Selecting previously unselected package python3-pybind11.
Preparing to unpack .../098-python3-pybind11_2.13.6-1_all.deb ...
Unpacking python3-pybind11 (2.13.6-1) ...
Selecting previously unselected package python3-pytest.
Preparing to unpack .../099-python3-pytest_8.3.3-1_all.deb ...
Unpacking python3-pytest (8.3.3-1) ...
Selecting previously unselected package python3-scipy.
Preparing to unpack .../100-python3-scipy_1.13.1-5+nocheck1_arm64.deb ...
Unpacking python3-scipy (1.13.1-5+nocheck1) ...
Selecting previously unselected package python3-setuptools-scm.
Preparing to unpack .../101-python3-setuptools-scm_8.1.0-1_all.deb ...
Unpacking python3-setuptools-scm (8.1.0-1) ...
Selecting previously unselected package python3-threadpoolctl.
Preparing to unpack .../102-python3-threadpoolctl_3.1.0-1_all.deb ...
Unpacking python3-threadpoolctl (3.1.0-1) ...
Selecting previously unselected package python3-sklearn-lib:arm64.
Preparing to unpack .../103-python3-sklearn-lib_1.4.2+dfsg-6+debusine1_arm64.deb ...
Unpacking python3-sklearn-lib:arm64 (1.4.2+dfsg-6+debusine1) ...
Selecting previously unselected package python3-sklearn.
Preparing to unpack .../104-python3-sklearn_1.4.2+dfsg-6_all.deb ...
Unpacking python3-sklearn (1.4.2+dfsg-6) ...
Selecting previously unselected package sbuild-build-depends-main-dummy.
Preparing to unpack .../105-sbuild-build-depends-main-dummy_0.invalid.0_arm64.deb ...
Unpacking sbuild-build-depends-main-dummy (0.invalid.0) ...
Setting up media-types (10.1.0) ...
Setting up dumb-init (1.2.5-3) ...
Setting up libfile-stripnondeterminism-perl (1.14.0-1) ...
Setting up libtirpc-common (1.3.4+ds-1.3) ...
Setting up po-debconf (1.0.21+nmu1) ...
Setting up libdebhelper-perl (13.20) ...
Setting up linux-libc-dev (6.10.12-1) ...
Setting up m4 (1.4.19-4) ...
Setting up libfakeroot:arm64 (1.36-1) ...
Setting up libelf1t64:arm64 (0.191-2) ...
Setting up fakeroot (1.36-1) ...
update-alternatives: using /usr/bin/fakeroot-sysv to provide /usr/bin/fakeroot (fakeroot) in auto mode
Setting up libpython3.13-minimal:arm64 (3.13.0~rc3-1) ...
Setting up autotools-dev (20220109.1) ...
Setting up libblas3:arm64 (3.12.0-3) ...
update-alternatives: using /usr/lib/aarch64-linux-gnu/blas/libblas.so.3 to provide /usr/lib/aarch64-linux-gnu/libblas.so.3 (libblas.so.3-aarch64-linux-gnu) in auto mode
Setting up rpcsvc-proto (1.4.3-1) ...
Setting up libmpfr6:arm64 (4.2.1-1+b1) ...
Setting up libproc2-0:arm64 (2:4.0.4-6) ...
Setting up libmpc3:arm64 (1.3.1-1+b2) ...
Setting up autopoint (0.22.5-2) ...
Setting up libncursesw6:arm64 (6.5-2) ...
Setting up libgfortran5:arm64 (14.2.0-5) ...
Setting up autoconf (2.72-3) ...
Setting up libubsan1:arm64 (14.2.0-5) ...
Setting up dh-strip-nondeterminism (1.14.0-1) ...
Setting up dwz (0.15-1+b1) ...
Setting up libhwasan0:arm64 (14.2.0-5) ...
Setting up libcrypt-dev:arm64 (1:4.4.36-5) ...
Setting up libasan8:arm64 (14.2.0-5) ...
Setting up procps (2:4.0.4-6) ...
Setting up python3.13-minimal (3.13.0~rc3-1) ...
Setting up libtsan2:arm64 (14.2.0-5) ...
Setting up libjs-jquery (3.6.1+dfsg+~3.5.14-1) ...
Setting up libisl23:arm64 (0.27-1) ...
Setting up libc-dev-bin (2.40-3) ...
Setting up libpython3.13-stdlib:arm64 (3.13.0~rc3-1) ...
Setting up libcc1-0:arm64 (14.2.0-5) ...
Setting up liblsan0:arm64 (14.2.0-5) ...
Setting up libitm1:arm64 (14.2.0-5) ...
Setting up libjs-underscore (1.13.4~dfsg+~1.11.4-3) ...
Setting up libpython3.13:arm64 (3.13.0~rc3-1) ...
Setting up automake (1:1.16.5-1.3) ...
update-alternatives: using /usr/bin/automake-1.16 to provide /usr/bin/automake (automake) in auto mode
Setting up liblapack3:arm64 (3.12.0-3) ...
update-alternatives: using /usr/lib/aarch64-linux-gnu/lapack/liblapack.so.3 to provide /usr/lib/aarch64-linux-gnu/liblapack.so.3 (liblapack.so.3-aarch64-linux-gnu) in auto mode
Setting up libtirpc3t64:arm64 (1.3.4+ds-1.3) ...
Setting up python3.13 (3.13.0~rc3-1) ...
Setting up libjs-sphinxdoc (7.4.7-3) ...
Setting up cpp-14-aarch64-linux-gnu (14.2.0-5) ...
Setting up libnsl2:arm64 (1.3.0-3+b2) ...
Setting up libc6-dev:arm64 (2.40-3) ...
Setting up libgcc-14-dev:arm64 (14.2.0-5) ...
Setting up libstdc++-14-dev:arm64 (14.2.0-5) ...
Setting up liblbfgsb0:arm64 (3.0+dfsg.4-1+b1) ...
Setting up libpython3.12-stdlib:arm64 (3.12.7-1) ...
Setting up python3.12 (3.12.7-1) ...
Setting up libpython3.12t64:arm64 (3.12.7-1) ...
Setting up cpp-aarch64-linux-gnu (4:14.1.0-2) ...
Setting up libexpat1-dev:arm64 (2.6.3-1) ...
Setting up cpp-14 (14.2.0-5) ...
Setting up zlib1g-dev:arm64 (1:1.3.dfsg+really1.3.1-1) ...
Setting up cpp (4:14.1.0-2) ...
Setting up gcc-14-aarch64-linux-gnu (14.2.0-5) ...
Setting up libpython3-stdlib:arm64 (3.12.6-1+debusine1) ...
Setting up gcc-aarch64-linux-gnu (4:14.1.0-2) ...
Setting up g++-14-aarch64-linux-gnu (14.2.0-5) ...
Setting up python3 (3.12.6-1+debusine1) ...
Setting up libpython3.12-dev:arm64 (3.12.7-1) ...
Setting up python3-zipp (3.20.2-1) ...
Setting up python3-autocommand (2.2.2-3) ...
Setting up python3-wheel (0.44.0-2) ...
Setting up gcc-14 (14.2.0-5) ...
Setting up python3-decorator (5.1.1-5) ...
Setting up python3-packaging (24.1-1) ...
Setting up python3-pyproject-hooks (1.1.0-2) ...
Setting up libpython3.13-dev:arm64 (3.13.0~rc3-1) ...
Setting up python3.12-dev (3.12.7-1) ...
Setting up python3-typing-extensions (4.12.2-2) ...
Setting up python3-toml (0.10.2-1) ...
Setting up python3-installer (0.7.0+dfsg1-3) ...
Setting up python3-pluggy (1.5.0-1) ...
Setting up g++-aarch64-linux-gnu (4:14.1.0-2) ...
Setting up g++-14 (14.2.0-5) ...
Setting up python3-build (1.2.2-1) ...
Setting up python3-more-itertools (10.5.0-1) ...
Setting up python3-iniconfig (1.1.1-2) ...
Setting up libpython3-dev:arm64 (3.12.6-1+debusine1) ...
Setting up python3-jaraco.functools (4.1.0-1) ...
Setting up python3-jaraco.context (6.0.0-1) ...
Setting up libtool (2.4.7-7) ...
Setting up python3.13-dev (3.13.0~rc3-1) ...
Setting up python3-pytest (8.3.3-1) ...
Setting up python3-typeguard (4.3.0-1) ...
Setting up python3-threadpoolctl (3.1.0-1) ...
Setting up python3-all (3.12.6-1+debusine1) ...
Setting up pybind11-dev (2.13.6-1) ...
Setting up gcc (4:14.1.0-2) ...
Setting up dh-autoreconf (20) ...
Setting up python3-inflect (7.3.1-2) ...
Setting up libpython3-all-dev:arm64 (3.12.6-1+debusine1) ...
Setting up python3-dev (3.12.6-1+debusine1) ...
Setting up g++ (4:14.1.0-2) ...
update-alternatives: using /usr/bin/g++ to provide /usr/bin/c++ (c++) in auto mode
Setting up build-essential (12.10) ...
Setting up python3-pybind11 (2.13.6-1) ...
Setting up python3-pkg-resources (74.1.2-2) ...
Setting up python3-all-dev (3.12.6-1+debusine1) ...
Setting up python3-setuptools (74.1.2-2) ...
Setting up python3-joblib (1.3.2-3) ...
Setting up debhelper (13.20) ...
Setting up python3-setuptools-scm (8.1.0-1) ...
Setting up python3-numpy (1:1.26.4+ds-11+bootstrap1) ...
Setting up dh-python (6.20240824) ...
Setting up python3-scipy (1.13.1-5+nocheck1) ...
Setting up pybuild-plugin-pyproject (6.20240824) ...
Setting up python3-sklearn-lib:arm64 (1.4.2+dfsg-6+debusine1) ...
Setting up python3-sklearn (1.4.2+dfsg-6) ...
Setting up sbuild-build-depends-main-dummy (0.invalid.0) ...
Processing triggers for man-db (2.13.0-1) ...
Processing triggers for libc-bin (2.40-3) ...
+------------------------------------------------------------------------------+
| Check architectures |
+------------------------------------------------------------------------------+
Arch check ok (arm64 included in any)
+------------------------------------------------------------------------------+
| Build environment |
+------------------------------------------------------------------------------+
Kernel: Linux 6.1.0-25-cloud-arm64 #1 SMP Debian 6.1.106-3 (2024-08-26) arm64 (aarch64)
Toolchain package versions: binutils_2.43.1-5 dpkg-dev_1.22.11 g++-14_14.2.0-5 gcc-14_14.2.0-5 libc6-dev_2.40-3 libstdc++-14-dev_14.2.0-5 libstdc++6_14.2.0-5 linux-libc-dev_6.10.12-1
Package versions: appstream_1.0.3-1 apt_2.9.8 autoconf_2.72-3 automake_1:1.16.5-1.3 autopoint_0.22.5-2 autotools-dev_20220109.1 base-files_13.5 base-passwd_3.6.4 bash_5.2.32-1+b1 binutils_2.43.1-5 binutils-aarch64-linux-gnu_2.43.1-5 binutils-common_2.43.1-5 bsdextrautils_2.40.2-9 bsdutils_1:2.40.2-9 build-essential_12.10 bzip2_1.0.8-6 ca-certificates_20240203 coreutils_9.4-3.1 cpp_4:14.1.0-2 cpp-14_14.2.0-5 cpp-14-aarch64-linux-gnu_14.2.0-5 cpp-aarch64-linux-gnu_4:14.1.0-2 dash_0.5.12-9 debconf_1.5.87 debhelper_13.20 debian-archive-keyring_2023.4 debianutils_5.20 dh-autoreconf_20 dh-python_6.20240824 dh-strip-nondeterminism_1.14.0-1 diffstat_1.66-1 diffutils_1:3.10-1 dpkg_1.22.11 dpkg-dev_1.22.11 dumb-init_1.2.5-3 dwz_0.15-1+b1 e2fsprogs_1.47.1-1 fakeroot_1.36-1 file_1:5.45-3 findutils_4.10.0-3 g++_4:14.1.0-2 g++-14_14.2.0-5 g++-14-aarch64-linux-gnu_14.2.0-5 g++-aarch64-linux-gnu_4:14.1.0-2 gcc_4:14.1.0-2 gcc-14_14.2.0-5 gcc-14-aarch64-linux-gnu_14.2.0-5 gcc-14-base_14.2.0-5 gcc-aarch64-linux-gnu_4:14.1.0-2 gettext_0.22.5-2 gettext-base_0.22.5-2 gpg_2.2.44-1 gpgconf_2.2.44-1 gpgv_2.2.44-1 grep_3.11-4 groff-base_1.23.0-5 gzip_1.12-1.1 hostname_3.23+nmu2 init-system-helpers_1.67 intltool-debian_0.35.0+20060710.6 iso-codes_4.17.0-1 libacl1_2.3.2-2 libaliased-perl_0.34-3 libappstream5_1.0.3-1 libapt-pkg-perl_0.1.40+b5 libapt-pkg6.0t64_2.9.8 libarchive-zip-perl_1.68-1 libasan8_14.2.0-5 libassuan9_3.0.1-2 libatomic1_14.2.0-5 libattr1_1:2.5.2-1 libaudit-common_1:4.0.1-1 libaudit1_1:4.0.1-1 libb-hooks-endofscope-perl_0.28-1 libb-hooks-op-check-perl_0.22-3+b1 libberkeleydb-perl_0.64-2+b3 libbinutils_2.43.1-5 libblas3_3.12.0-3 libblkid1_2.40.2-9 libbrotli1_1.1.0-2+b4 libbsd0_0.12.2-2 libbz2-1.0_1.0.8-6 libc-bin_2.40-3 libc-dev-bin_2.40-3 libc6_2.40-3 libc6-dev_2.40-3 libcap-ng0_0.8.5-2 libcap2_1:2.66-5 libcapture-tiny-perl_0.48-2 libcc1-0_14.2.0-5 libcgi-pm-perl_4.66-1 libclass-data-inheritable-perl_0.08-3 libclass-inspector-perl_1.36-3 libclass-method-modifiers-perl_2.15-1 libclass-xsaccessor-perl_1.19-4+b3 libclone-perl_0.47-1 libcom-err2_1.47.1-1 libconfig-tiny-perl_2.30-1 libconst-fast-perl_0.014-2 libcpanel-json-xs-perl_4.38-1 libcrypt-dev_1:4.4.36-5 libcrypt1_1:4.4.36-5 libctf-nobfd0_2.43.1-5 libctf0_2.43.1-5 libcurl3t64-gnutls_8.10.1-1 libdata-dpath-perl_0.59-1 libdata-messagepack-perl_1.02-1+b3 libdata-optlist-perl_0.114-1 libdata-validate-domain-perl_0.15-1 libdata-validate-ip-perl_0.31-1 libdata-validate-uri-perl_0.07-3 libdb5.3t64_5.3.28+dfsg2-7 libdebconfclient0_0.272 libdebhelper-perl_13.20 libdevel-callchecker-perl_0.009-1 libdevel-size-perl_0.84-1 libdevel-stacktrace-perl_2.0500-1 libdpkg-perl_1.22.11 libdynaloader-functions-perl_0.004-1 libelf1t64_0.191-2 libemail-address-xs-perl_1.05-1+b3 libencode-locale-perl_1.05-3 libexception-class-perl_1.45-1 libexpat1_2.6.3-1 libexpat1-dev_2.6.3-1 libext2fs2t64_1.47.1-1 libfakeroot_1.36-1 libffi8_3.4.6-1 libfile-basedir-perl_0.09-2 libfile-find-rule-perl_0.34-3 libfile-listing-perl_6.16-1 libfile-sharedir-perl_1.118-3 libfile-stripnondeterminism-perl_1.14.0-1 libfont-ttf-perl_1.06-2 libgcc-14-dev_14.2.0-5 libgcc-s1_14.2.0-5 libgcrypt20_1.11.0-6 libgdbm-compat4t64_1.24-2 libgdbm6t64_1.24-2 libgfortran5_14.2.0-5 libglib2.0-0t64_2.82.1-1 libgmp10_2:6.3.0+dfsg-2+b1 libgnutls30t64_3.8.6-2 libgomp1_14.2.0-5 libgpg-error0_1.50-4 libgprofng0_2.43.1-5 libgssapi-krb5-2_1.21.3-3 libhogweed6t64_3.10-1 libhtml-form-perl_6.11-1 libhtml-html5-entities-perl_0.004-3 libhtml-parser-perl_3.83-1 libhtml-tagset-perl_3.24-1 libhtml-tokeparser-simple-perl_3.16-4 
libhtml-tree-perl_5.07-3 libhttp-cookies-perl_6.11-1 libhttp-date-perl_6.06-1 libhttp-message-perl_6.46-1 libhttp-negotiate-perl_6.01-2 libhwasan0_14.2.0-5 libicu72_72.1-5 libidn2-0_2.3.7-2 libimport-into-perl_1.002005-2 libio-html-perl_1.004-3 libio-interactive-perl_1.025-1 libio-socket-ssl-perl_2.089-1 libio-string-perl_1.08-4 libipc-run3-perl_0.049-1 libipc-system-simple-perl_1.30-2 libisl23_0.27-1 libiterator-perl_0.03+ds1-2 libiterator-util-perl_0.02+ds1-2 libitm1_14.2.0-5 libjansson4_2.14-2+b2 libjs-jquery_3.6.1+dfsg+~3.5.14-1 libjs-sphinxdoc_7.4.7-3 libjs-underscore_1.13.4~dfsg+~1.11.4-3 libjson-maybexs-perl_1.004008-1 libk5crypto3_1.21.3-3 libkeyutils1_1.6.3-3 libkrb5-3_1.21.3-3 libkrb5support0_1.21.3-3 liblapack3_3.12.0-3 liblbfgsb0_3.0+dfsg.4-1+b1 libldap-2.5-0_2.5.18+dfsg-3 liblist-compare-perl_0.55-2 liblist-someutils-perl_0.59-1 liblist-utilsby-perl_0.12-2 liblsan0_14.2.0-5 liblwp-mediatypes-perl_6.04-2 liblwp-protocol-https-perl_6.14-1 liblz1_1.15~pre1-1 liblz4-1_1.9.4-3 liblzma5_5.6.2-2 liblzo2-2_2.10-3 libmagic-mgc_1:5.45-3 libmagic1t64_1:5.45-3 libmarkdown2_2.2.7-2+b1 libmd0_1.1.0-2 libmldbm-perl_2.05-4 libmodule-implementation-perl_0.09-2 libmodule-runtime-perl_0.016-2 libmoo-perl_2.005005-1 libmoox-aliases-perl_0.001006-2 libmount1_2.40.2-9 libmouse-perl_2.5.11-1 libmpc3_1.3.1-1+b2 libmpfr6_4.2.1-1+b1 libnamespace-clean-perl_0.27-2 libncursesw6_6.5-2 libnet-domain-tld-perl_1.75-4 libnet-http-perl_6.23-1 libnet-ipv6addr-perl_1.02-1 libnet-netmask-perl_2.0002-2 libnet-ssleay-perl_1.94-1+b1 libnetaddr-ip-perl_4.079+dfsg-2+b3 libnettle8t64_3.10-1 libnghttp2-14_1.63.0-1 libnghttp3-9_1.4.0-1 libngtcp2-16_1.6.0-1 libngtcp2-crypto-gnutls8_1.6.0-1 libnsl2_1.3.0-3+b2 libnumber-compare-perl_0.03-3 libp11-kit0_0.25.5-2 libpackage-stash-perl_0.40-1 libpam-modules_1.5.3-7 libpam-modules-bin_1.5.3-7 libpam-runtime_1.5.3-7 libpam0g_1.5.3-7 libparams-classify-perl_0.015-2+b3 libparams-util-perl_1.102-3 libpath-tiny-perl_0.146-1 libpcre2-8-0_10.42-4+b1 libperl5.38t64_5.38.2-5 libperlio-gzip-perl_0.20-1+b3 libperlio-utf8-strict-perl_0.010-1+b2 libpipeline1_1.5.8-1 libproc-processtable-perl_0.636-1+b2 libproc2-0_2:4.0.4-6 libpsl5t64_0.21.2-1.1 libpython3-all-dev_3.12.6-1+debusine1 libpython3-dev_3.12.6-1+debusine1 libpython3-stdlib_3.12.6-1+debusine1 libpython3.12-dev_3.12.7-1 libpython3.12-minimal_3.12.7-1 libpython3.12-stdlib_3.12.7-1 libpython3.12t64_3.12.7-1 libpython3.13_3.13.0~rc3-1 libpython3.13-dev_3.13.0~rc3-1 libpython3.13-minimal_3.13.0~rc3-1 libpython3.13-stdlib_3.13.0~rc3-1 libreadline8t64_8.2-5 libregexp-wildcards-perl_1.05-3 librole-tiny-perl_2.002004-1 librtmp1_2.4+20151223.gitfa8646d.1-2+b4 libsasl2-2_2.1.28+dfsg1-8 libsasl2-modules-db_2.1.28+dfsg1-8 libseccomp2_2.5.5-1+b1 libselinux1_3.7-3 libsemanage-common_3.7-2 libsemanage2_3.7-2 libsepol2_3.7-1 libsereal-decoder-perl_5.004+ds-1+b2 libsereal-encoder-perl_5.004+ds-1+b2 libsframe1_2.43.1-5 libsmartcols1_2.40.2-9 libsort-versions-perl_1.62-3 libsqlite3-0_3.46.1-1 libss2_1.47.1-1 libssh2-1t64_1.11.0-7 libssl3t64_3.3.2-1 libstdc++-14-dev_14.2.0-5 libstdc++6_14.2.0-5 libstemmer0d_2.2.0-4+b1 libstrictures-perl_2.000006-1 libsub-exporter-perl_0.990-1 libsub-exporter-progressive-perl_0.001013-3 libsub-identify-perl_0.14-3+b2 libsub-install-perl_0.929-1 libsub-name-perl_0.27-1+b2 libsub-quote-perl_2.006008-1 libsyntax-keyword-try-perl_0.30-1 libsystemd0_256.6-1 libtasn1-6_4.19.0-3+b2 libterm-readkey-perl_2.38-2+b3 libtext-glob-perl_0.11-3 libtext-levenshteinxs-perl_0.03-5+b3 libtext-markdown-discount-perl_0.16-1+b2 
libtext-xslate-perl_3.5.9-2 libtime-duration-perl_1.21-2 libtime-moment-perl_0.44-2+b3 libtimedate-perl_2.3300-2 libtinfo6_6.5-2 libtirpc-common_1.3.4+ds-1.3 libtirpc3t64_1.3.4+ds-1.3 libtool_2.4.7-7 libtry-tiny-perl_0.32-1 libtsan2_14.2.0-5 libubsan1_14.2.0-5 libuchardet0_0.0.8-1+b1 libudev1_256.6-1 libunicode-utf8-perl_0.62-2+b2 libunistring5_1.2-1 liburi-perl_5.29-1 libuuid1_2.40.2-9 libvariable-magic-perl_0.64-1 libwww-mechanize-perl_2.19-1 libwww-perl_6.77-1 libwww-robotrules-perl_6.02-1 libxml-libxml-perl_2.0207+dfsg+really+2.0134-5 libxml-namespacesupport-perl_1.12-2 libxml-sax-base-perl_1.09-3 libxml-sax-perl_1.02+dfsg-3 libxml2_2.12.7+dfsg+really2.9.14-0.1 libxmlb2_0.3.19-1 libxs-parse-keyword-perl_0.46-1 libxxhash0_0.8.2-2+b1 libyaml-0-2_0.2.5-1+b1 libyaml-libyaml-perl_0.902.0+ds-2 libzstd1_1.5.6+dfsg-1 lintian_2.119.0 linux-libc-dev_6.10.12-1 login_1:4.16.0-2+really2.40.2-9 login.defs_1:4.16.0-4 logsave_1.47.1-1 lzop_1.04-2 m4_1.4.19-4 make_4.3-4.1 man-db_2.13.0-1 mawk_1.3.4.20240905-1 media-types_10.1.0 mount_2.40.2-9 ncurses-base_6.5-2 ncurses-bin_6.5-2 netbase_6.4 openssl_3.3.2-1 openssl-provider-legacy_3.3.2-1 passwd_1:4.16.0-4 patch_2.7.6-7 patchutils_0.4.2-1 perl_5.38.2-5 perl-base_5.38.2-5 perl-modules-5.38_5.38.2-5 perl-openssl-defaults_7+b2 plzip_1.11-2 po-debconf_1.0.21+nmu1 procps_2:4.0.4-6 pybind11-dev_2.13.6-1 pybuild-plugin-pyproject_6.20240824 python3_3.12.6-1+debusine1 python3-all_3.12.6-1+debusine1 python3-all-dev_3.12.6-1+debusine1 python3-autocommand_2.2.2-3 python3-build_1.2.2-1 python3-decorator_5.1.1-5 python3-dev_3.12.6-1+debusine1 python3-inflect_7.3.1-2 python3-iniconfig_1.1.1-2 python3-installer_0.7.0+dfsg1-3 python3-jaraco.context_6.0.0-1 python3-jaraco.functools_4.1.0-1 python3-joblib_1.3.2-3 python3-minimal_3.12.6-1+debusine1 python3-more-itertools_10.5.0-1 python3-numpy_1:1.26.4+ds-11+bootstrap1 python3-packaging_24.1-1 python3-pkg-resources_74.1.2-2 python3-pluggy_1.5.0-1 python3-pybind11_2.13.6-1 python3-pyproject-hooks_1.1.0-2 python3-pytest_8.3.3-1 python3-scipy_1.13.1-5+nocheck1 python3-setuptools_74.1.2-2 python3-setuptools-scm_8.1.0-1 python3-sklearn_1.4.2+dfsg-6 python3-sklearn-lib_1.4.2+dfsg-6+debusine1 python3-threadpoolctl_3.1.0-1 python3-toml_0.10.2-1 python3-typeguard_4.3.0-1 python3-typing-extensions_4.12.2-2 python3-wheel_0.44.0-2 python3-zipp_3.20.2-1 python3.12_3.12.7-1 python3.12-dev_3.12.7-1 python3.12-minimal_3.12.7-1 python3.13_3.13.0~rc3-1 python3.13-dev_3.13.0~rc3-1 python3.13-minimal_3.13.0~rc3-1 readline-common_8.2-5 rpcsvc-proto_1.4.3-1 sbuild-build-depends-main-dummy_0.invalid.0 sed_4.9-2 sensible-utils_0.0.24 shared-mime-info_2.4-5 sysvinit-utils_3.10-2 t1utils_1.41-4 tar_1.35+dfsg-3 tzdata_2024b-1 ucf_3.0043+nmu1 unzip_6.0-28 util-linux_2.40.2-9 xz-utils_5.6.2-2 zlib1g_1:1.3.dfsg+really1.3.1-1 zlib1g-dev_1:1.3.dfsg+really1.3.1-1
+------------------------------------------------------------------------------+
| Build |
+------------------------------------------------------------------------------+
Unpack source
-------------
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
Format: 3.0 (quilt)
Source: python-hmmlearn
Binary: python3-hmmlearn
Architecture: any
Version: 0.3.0-5
Maintainer: Debian Med Packaging Team <debian-med-packaging@lists.alioth.debian.org>
Uploaders: Andreas Tille <tille@debian.org>
Homepage: https://github.com/hmmlearn/hmmlearn
Standards-Version: 4.6.2
Vcs-Browser: https://salsa.debian.org/med-team/python-hmmlearn
Vcs-Git: https://salsa.debian.org/med-team/python-hmmlearn.git
Testsuite: autopkgtest-pkg-pybuild
Build-Depends: debhelper-compat (= 13), dh-sequence-python3, pybuild-plugin-pyproject, python3-setuptools, python3-setuptools-scm, python3-all-dev, python3-pybind11, python3-pytest <!nocheck>, python3-numpy <!nocheck>, python3-sklearn <!nocheck>
Package-List:
python3-hmmlearn deb python optional arch=any
Checksums-Sha1:
1d4f38a57ce245de411f779546b2757f667ce7e4 74561 python-hmmlearn_0.3.0.orig.tar.gz
5a326c0035ca5175dd8b5179125223fbb363a463 5212 python-hmmlearn_0.3.0-5.debian.tar.xz
Checksums-Sha256:
128353bb361079254d0ee052274d40003f78b7db0986f3e417d184fcb71a7b95 74561 python-hmmlearn_0.3.0.orig.tar.gz
9e1b538bbde88db415735b2c73524ab61e8b02c6fe0b24bd2f397a2955f56bc9 5212 python-hmmlearn_0.3.0-5.debian.tar.xz
Files:
97922061f6fe7f588b7ee962eb18e37e 74561 python-hmmlearn_0.3.0.orig.tar.gz
4efd08e72d67c51c8f03b655fdffbca3 5212 python-hmmlearn_0.3.0-5.debian.tar.xz
-----BEGIN PGP SIGNATURE-----
iQIzBAEBCgAdFiEEck1gkzcRPHEFUNdHPCZ2P2xn5uIFAmYqnjcACgkQPCZ2P2xn
5uKazw/8Cd1cc/PI3+YrmNu/iSpON0PG65vLWlVBj4krnHsAG55uNkLZv/MC/g53
oOKoIzrk70TrHdXx/umu1TqdAVYMcCC5PYtvsjX0R1njCrocnriXmiz+hlMEm8hh
THZ+958NGmVUi4/20yohEqZNR8tOHwinX+qP0CBOj5QmHVxPvjmxsL1RPMYL/yTN
QOneUwCmkNzaTrRCUcXdWmCHcdlIET2YREnlJQ2OMmE89cH0G3RQzLHA1Q2hB5dl
ZSUMxFnLAikNAH1u200HN66NjGPIt042g/IOv4OOPdp52TPqnzWeu9e4HsT431ip
bJ8bjZC+nd7/Cv99mbA04EuSy0B2xmCSWWnVdWZAzXeinTxOalFgzol6ez72DB0G
J/cmtOTBy38MKb4ULA1dInQTR2BMu6CBjQo0xOiAhXY3E2N0xNFNxtyVnjWs65at
CIIroYYu8HWJ63T5ghrO/ZnMpT7OPYbmeNXrd31gWmsdbCWbcnx6d46VAl9GsEEQ
El/TFykAll/TG3cLmXZGNpz05frAAEeF9UV9WMyAo2E4V/ZglDYQ/6EwgJ+ygMj1
ThaMjxmKmMFrxWMKYiehMPKMUIm7as8zPlOiv51PtHpvOxwpP/Bro1mJ7F1lXZRk
s7tGQJygCpdpqKryp94PDAh7JGJIXIjfFXkHuiZ7qmCMvltwFnw=
=ncTD
-----END PGP SIGNATURE-----
gpgv: Signature made Thu Apr 25 18:17:27 2024 UTC
gpgv: using RSA key 724D609337113C710550D7473C26763F6C67E6E2
gpgv: Can't check signature: No public key
dpkg-source: warning: cannot verify inline signature for ./python-hmmlearn_0.3.0-5.dsc: no acceptable signature found
dpkg-source: info: extracting python-hmmlearn in /<<PKGBUILDDIR>>
dpkg-source: info: unpacking python-hmmlearn_0.3.0.orig.tar.gz
dpkg-source: info: unpacking python-hmmlearn_0.3.0-5.debian.tar.xz
dpkg-source: info: using patch list from debian/patches/series
dpkg-source: info: applying 799352376a16e9d1658cbf00e103af1b74f4c76a.patch
dpkg-source: info: applying 863f4844c2c1ebc59be361ea081309259a1eb842.patch
dpkg-source: info: applying c68714f53109536ce39d108f578f290e19c769fd.patch
dpkg-source: info: applying ba2bab6b731044bbccd0a36024ae6ebe50ce80d7.patch
Check disk space
----------------
Sufficient free space for build
Hack binNMU version
-------------------
Created changelog entry for binNMU version 0.3.0-5+bd1
User Environment
----------------
APT_CONFIG=/var/lib/sbuild/apt.conf
HOME=/sbuild-nonexistent
LANG=en_US.UTF-8
LC_ALL=C.UTF-8
LOGNAME=debusine-worker
OLDPWD=/
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
PWD=/<<PKGBUILDDIR>>
SHELL=/bin/sh
USER=debusine-worker
dpkg-buildpackage
-----------------
Command: dpkg-buildpackage --sanitize-env -us -uc -mDebusine Rebuild <debusine@example.net> -B -rfakeroot
dpkg-buildpackage: info: source package python-hmmlearn
dpkg-buildpackage: info: source version 0.3.0-5+bd1
dpkg-buildpackage: info: source distribution sid
dpkg-source --before-build .
dpkg-buildpackage: info: host architecture arm64
debian/rules clean
dh clean --buildsystem=pybuild
dh_auto_clean -O--buildsystem=pybuild
dh_autoreconf_clean -O--buildsystem=pybuild
dh_clean -O--buildsystem=pybuild
debian/rules binary-arch
dh binary-arch --buildsystem=pybuild
dh_update_autotools_config -a -O--buildsystem=pybuild
dh_autoreconf -a -O--buildsystem=pybuild
dh_auto_configure -a -O--buildsystem=pybuild
dh_auto_build -a -O--buildsystem=pybuild
I: pybuild plugin_pyproject:129: Building wheel for python3.13 with "build" module
I: pybuild base:311: python3.13 -m build --skip-dependency-check --no-isolation --wheel --outdir /<<PKGBUILDDIR>>/.pybuild/cpython3_3.13_hmmlearn
* Building wheel...
WARNING setuptools_scm.pyproject_reading toml section missing 'pyproject.toml does not contain a tool.setuptools_scm section'
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/setuptools_scm/_integration/pyproject_reading.py", line 36, in read_pyproject
section = defn.get("tool", {})[tool_name]
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
KeyError: 'setuptools_scm'
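The WARNING and the KeyError above come from setuptools_scm probing pyproject.toml for a [tool.setuptools_scm] table that this source apparently does not declare; setuptools_scm downgrades the failed lookup to the warning line and the build carries on, producing a 0.3.0 wheel regardless. A minimal sketch of that lookup, assuming a pyproject.toml without the table (the TOML content below is illustrative, not copied from this package):

# Sketch of the probe behind the warning above (illustrative, not the packaged
# code): parse a pyproject.toml that lacks [tool.setuptools_scm] and index into
# the missing table the same way read_pyproject() does at the quoted line 36.
import tomllib

PYPROJECT = """
[build-system]
requires = ["setuptools", "setuptools_scm", "pybind11"]
build-backend = "setuptools.build_meta"
"""

defn = tomllib.loads(PYPROJECT)
try:
    section = defn.get("tool", {})["setuptools_scm"]
except KeyError as exc:
    # setuptools_scm catches this and reports it as the WARNING seen in the log.
    print("tool.setuptools_scm section missing:", exc)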
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-aarch64-cpython-313
creating build/lib.linux-aarch64-cpython-313/hmmlearn
copying lib/hmmlearn/__init__.py -> build/lib.linux-aarch64-cpython-313/hmmlearn
copying lib/hmmlearn/utils.py -> build/lib.linux-aarch64-cpython-313/hmmlearn
copying lib/hmmlearn/_kl_divergence.py -> build/lib.linux-aarch64-cpython-313/hmmlearn
copying lib/hmmlearn/_version.py -> build/lib.linux-aarch64-cpython-313/hmmlearn
copying lib/hmmlearn/hmm.py -> build/lib.linux-aarch64-cpython-313/hmmlearn
copying lib/hmmlearn/_emissions.py -> build/lib.linux-aarch64-cpython-313/hmmlearn
copying lib/hmmlearn/vhmm.py -> build/lib.linux-aarch64-cpython-313/hmmlearn
copying lib/hmmlearn/stats.py -> build/lib.linux-aarch64-cpython-313/hmmlearn
copying lib/hmmlearn/_utils.py -> build/lib.linux-aarch64-cpython-313/hmmlearn
copying lib/hmmlearn/base.py -> build/lib.linux-aarch64-cpython-313/hmmlearn
creating build/lib.linux-aarch64-cpython-313/hmmlearn/tests
copying lib/hmmlearn/tests/test_categorical_hmm.py -> build/lib.linux-aarch64-cpython-313/hmmlearn/tests
copying lib/hmmlearn/tests/__init__.py -> build/lib.linux-aarch64-cpython-313/hmmlearn/tests
copying lib/hmmlearn/tests/test_base.py -> build/lib.linux-aarch64-cpython-313/hmmlearn/tests
copying lib/hmmlearn/tests/conftest.py -> build/lib.linux-aarch64-cpython-313/hmmlearn/tests
copying lib/hmmlearn/tests/test_variational_gaussian.py -> build/lib.linux-aarch64-cpython-313/hmmlearn/tests
copying lib/hmmlearn/tests/test_gmm_hmm_multisequence.py -> build/lib.linux-aarch64-cpython-313/hmmlearn/tests
copying lib/hmmlearn/tests/test_gmm_hmm.py -> build/lib.linux-aarch64-cpython-313/hmmlearn/tests
copying lib/hmmlearn/tests/test_gaussian_hmm.py -> build/lib.linux-aarch64-cpython-313/hmmlearn/tests
copying lib/hmmlearn/tests/test_poisson_hmm.py -> build/lib.linux-aarch64-cpython-313/hmmlearn/tests
copying lib/hmmlearn/tests/test_kl_divergence.py -> build/lib.linux-aarch64-cpython-313/hmmlearn/tests
copying lib/hmmlearn/tests/test_gmm_hmm_new.py -> build/lib.linux-aarch64-cpython-313/hmmlearn/tests
copying lib/hmmlearn/tests/test_variational_categorical.py -> build/lib.linux-aarch64-cpython-313/hmmlearn/tests
copying lib/hmmlearn/tests/test_utils.py -> build/lib.linux-aarch64-cpython-313/hmmlearn/tests
copying lib/hmmlearn/tests/test_multinomial_hmm.py -> build/lib.linux-aarch64-cpython-313/hmmlearn/tests
running build_ext
building 'hmmlearn._hmmc' extension
creating build/temp.linux-aarch64-cpython-313
creating build/temp.linux-aarch64-cpython-313/src
aarch64-linux-gnu-g++ -g -O2 -ffile-prefix-map=/<<PKGBUILDDIR>>=. -fstack-protector-strong -fstack-clash-protection -Wformat -Werror=format-security -mbranch-protection=standard -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/lib/python3/dist-packages/pybind11/include -I/usr/include/python3.13 -c src/_hmmc.cpp -o build/temp.linux-aarch64-cpython-313/src/_hmmc.o -fvisibility=hidden -std=c++11
aarch64-linux-gnu-g++ -g -O2 -ffile-prefix-map=/<<PKGBUILDDIR>>=. -fstack-protector-strong -fstack-clash-protection -Wformat -Werror=format-security -mbranch-protection=standard -Wdate-time -D_FORTIFY_SOURCE=2 -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fwrapv -O2 -Wl,-z,relro -g -O2 -ffile-prefix-map=/<<PKGBUILDDIR>>=. -fstack-protector-strong -fstack-clash-protection -Wformat -Werror=format-security -mbranch-protection=standard -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-aarch64-cpython-313/src/_hmmc.o -L/usr/lib/aarch64-linux-gnu -o build/lib.linux-aarch64-cpython-313/hmmlearn/_hmmc.cpython-313-aarch64-linux-gnu.so
installing to build/bdist.linux-aarch64/wheel
running install
running install_lib
creating build/bdist.linux-aarch64
creating build/bdist.linux-aarch64/wheel
creating build/bdist.linux-aarch64/wheel/hmmlearn
copying build/lib.linux-aarch64-cpython-313/hmmlearn/__init__.py -> build/bdist.linux-aarch64/wheel/./hmmlearn
copying build/lib.linux-aarch64-cpython-313/hmmlearn/utils.py -> build/bdist.linux-aarch64/wheel/./hmmlearn
copying build/lib.linux-aarch64-cpython-313/hmmlearn/_kl_divergence.py -> build/bdist.linux-aarch64/wheel/./hmmlearn
copying build/lib.linux-aarch64-cpython-313/hmmlearn/_hmmc.cpython-313-aarch64-linux-gnu.so -> build/bdist.linux-aarch64/wheel/./hmmlearn
creating build/bdist.linux-aarch64/wheel/hmmlearn/tests
copying build/lib.linux-aarch64-cpython-313/hmmlearn/tests/test_categorical_hmm.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-313/hmmlearn/tests/__init__.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-313/hmmlearn/tests/test_base.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-313/hmmlearn/tests/conftest.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-313/hmmlearn/tests/test_variational_gaussian.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-313/hmmlearn/tests/test_gmm_hmm_multisequence.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-313/hmmlearn/tests/test_gmm_hmm.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-313/hmmlearn/tests/test_gaussian_hmm.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-313/hmmlearn/tests/test_poisson_hmm.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-313/hmmlearn/tests/test_kl_divergence.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-313/hmmlearn/tests/test_gmm_hmm_new.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-313/hmmlearn/tests/test_variational_categorical.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-313/hmmlearn/tests/test_utils.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-313/hmmlearn/tests/test_multinomial_hmm.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-313/hmmlearn/_version.py -> build/bdist.linux-aarch64/wheel/./hmmlearn
copying build/lib.linux-aarch64-cpython-313/hmmlearn/hmm.py -> build/bdist.linux-aarch64/wheel/./hmmlearn
copying build/lib.linux-aarch64-cpython-313/hmmlearn/_emissions.py -> build/bdist.linux-aarch64/wheel/./hmmlearn
copying build/lib.linux-aarch64-cpython-313/hmmlearn/vhmm.py -> build/bdist.linux-aarch64/wheel/./hmmlearn
copying build/lib.linux-aarch64-cpython-313/hmmlearn/stats.py -> build/bdist.linux-aarch64/wheel/./hmmlearn
copying build/lib.linux-aarch64-cpython-313/hmmlearn/_utils.py -> build/bdist.linux-aarch64/wheel/./hmmlearn
copying build/lib.linux-aarch64-cpython-313/hmmlearn/base.py -> build/bdist.linux-aarch64/wheel/./hmmlearn
running install_egg_info
running egg_info
creating lib/hmmlearn.egg-info
writing lib/hmmlearn.egg-info/PKG-INFO
writing dependency_links to lib/hmmlearn.egg-info/dependency_links.txt
writing requirements to lib/hmmlearn.egg-info/requires.txt
writing top-level names to lib/hmmlearn.egg-info/top_level.txt
writing manifest file 'lib/hmmlearn.egg-info/SOURCES.txt'
reading manifest file 'lib/hmmlearn.egg-info/SOURCES.txt'
adding license file 'LICENSE.txt'
adding license file 'AUTHORS.rst'
writing manifest file 'lib/hmmlearn.egg-info/SOURCES.txt'
Copying lib/hmmlearn.egg-info to build/bdist.linux-aarch64/wheel/./hmmlearn-0.3.0.egg-info
running install_scripts
creating build/bdist.linux-aarch64/wheel/hmmlearn-0.3.0.dist-info/WHEEL
creating '/<<PKGBUILDDIR>>/.pybuild/cpython3_3.13_hmmlearn/.tmp-7iokl7yo/hmmlearn-0.3.0-cp313-cp313-linux_aarch64.whl' and adding 'build/bdist.linux-aarch64/wheel' to it
adding 'hmmlearn/__init__.py'
adding 'hmmlearn/_emissions.py'
adding 'hmmlearn/_hmmc.cpython-313-aarch64-linux-gnu.so'
adding 'hmmlearn/_kl_divergence.py'
adding 'hmmlearn/_utils.py'
adding 'hmmlearn/_version.py'
adding 'hmmlearn/base.py'
adding 'hmmlearn/hmm.py'
adding 'hmmlearn/stats.py'
adding 'hmmlearn/utils.py'
adding 'hmmlearn/vhmm.py'
adding 'hmmlearn/tests/__init__.py'
adding 'hmmlearn/tests/conftest.py'
adding 'hmmlearn/tests/test_base.py'
adding 'hmmlearn/tests/test_categorical_hmm.py'
adding 'hmmlearn/tests/test_gaussian_hmm.py'
adding 'hmmlearn/tests/test_gmm_hmm.py'
adding 'hmmlearn/tests/test_gmm_hmm_multisequence.py'
adding 'hmmlearn/tests/test_gmm_hmm_new.py'
adding 'hmmlearn/tests/test_kl_divergence.py'
adding 'hmmlearn/tests/test_multinomial_hmm.py'
adding 'hmmlearn/tests/test_poisson_hmm.py'
adding 'hmmlearn/tests/test_utils.py'
adding 'hmmlearn/tests/test_variational_categorical.py'
adding 'hmmlearn/tests/test_variational_gaussian.py'
adding 'hmmlearn-0.3.0.dist-info/AUTHORS.rst'
adding 'hmmlearn-0.3.0.dist-info/LICENSE.txt'
adding 'hmmlearn-0.3.0.dist-info/METADATA'
adding 'hmmlearn-0.3.0.dist-info/WHEEL'
adding 'hmmlearn-0.3.0.dist-info/top_level.txt'
adding 'hmmlearn-0.3.0.dist-info/RECORD'
removing build/bdist.linux-aarch64/wheel
Successfully built hmmlearn-0.3.0-cp313-cp313-linux_aarch64.whl
I: pybuild plugin_pyproject:144: Unpacking wheel built for python3.13 with "installer" module
I: pybuild plugin_pyproject:129: Building wheel for python3.12 with "build" module
I: pybuild base:311: python3.12 -m build --skip-dependency-check --no-isolation --wheel --outdir /<<PKGBUILDDIR>>/.pybuild/cpython3_3.12_hmmlearn
* Building wheel...
WARNING setuptools_scm.pyproject_reading toml section missing 'pyproject.toml does not contain a tool.setuptools_scm section'
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/setuptools_scm/_integration/pyproject_reading.py", line 36, in read_pyproject
section = defn.get("tool", {})[tool_name]
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
KeyError: 'setuptools_scm'
running bdist_wheel
running build
running build_py
creating build/lib.linux-aarch64-cpython-312
creating build/lib.linux-aarch64-cpython-312/hmmlearn
copying lib/hmmlearn/__init__.py -> build/lib.linux-aarch64-cpython-312/hmmlearn
copying lib/hmmlearn/utils.py -> build/lib.linux-aarch64-cpython-312/hmmlearn
copying lib/hmmlearn/_kl_divergence.py -> build/lib.linux-aarch64-cpython-312/hmmlearn
copying lib/hmmlearn/_version.py -> build/lib.linux-aarch64-cpython-312/hmmlearn
copying lib/hmmlearn/hmm.py -> build/lib.linux-aarch64-cpython-312/hmmlearn
copying lib/hmmlearn/_emissions.py -> build/lib.linux-aarch64-cpython-312/hmmlearn
copying lib/hmmlearn/vhmm.py -> build/lib.linux-aarch64-cpython-312/hmmlearn
copying lib/hmmlearn/stats.py -> build/lib.linux-aarch64-cpython-312/hmmlearn
copying lib/hmmlearn/_utils.py -> build/lib.linux-aarch64-cpython-312/hmmlearn
copying lib/hmmlearn/base.py -> build/lib.linux-aarch64-cpython-312/hmmlearn
creating build/lib.linux-aarch64-cpython-312/hmmlearn/tests
copying lib/hmmlearn/tests/test_categorical_hmm.py -> build/lib.linux-aarch64-cpython-312/hmmlearn/tests
copying lib/hmmlearn/tests/__init__.py -> build/lib.linux-aarch64-cpython-312/hmmlearn/tests
copying lib/hmmlearn/tests/test_base.py -> build/lib.linux-aarch64-cpython-312/hmmlearn/tests
copying lib/hmmlearn/tests/conftest.py -> build/lib.linux-aarch64-cpython-312/hmmlearn/tests
copying lib/hmmlearn/tests/test_variational_gaussian.py -> build/lib.linux-aarch64-cpython-312/hmmlearn/tests
copying lib/hmmlearn/tests/test_gmm_hmm_multisequence.py -> build/lib.linux-aarch64-cpython-312/hmmlearn/tests
copying lib/hmmlearn/tests/test_gmm_hmm.py -> build/lib.linux-aarch64-cpython-312/hmmlearn/tests
copying lib/hmmlearn/tests/test_gaussian_hmm.py -> build/lib.linux-aarch64-cpython-312/hmmlearn/tests
copying lib/hmmlearn/tests/test_poisson_hmm.py -> build/lib.linux-aarch64-cpython-312/hmmlearn/tests
copying lib/hmmlearn/tests/test_kl_divergence.py -> build/lib.linux-aarch64-cpython-312/hmmlearn/tests
copying lib/hmmlearn/tests/test_gmm_hmm_new.py -> build/lib.linux-aarch64-cpython-312/hmmlearn/tests
copying lib/hmmlearn/tests/test_variational_categorical.py -> build/lib.linux-aarch64-cpython-312/hmmlearn/tests
copying lib/hmmlearn/tests/test_utils.py -> build/lib.linux-aarch64-cpython-312/hmmlearn/tests
copying lib/hmmlearn/tests/test_multinomial_hmm.py -> build/lib.linux-aarch64-cpython-312/hmmlearn/tests
running build_ext
building 'hmmlearn._hmmc' extension
creating build/temp.linux-aarch64-cpython-312
creating build/temp.linux-aarch64-cpython-312/src
aarch64-linux-gnu-g++ -g -O2 -ffile-prefix-map=/<<PKGBUILDDIR>>=. -fstack-protector-strong -fstack-clash-protection -Wformat -Werror=format-security -mbranch-protection=standard -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/lib/python3/dist-packages/pybind11/include -I/usr/include/python3.12 -c src/_hmmc.cpp -o build/temp.linux-aarch64-cpython-312/src/_hmmc.o -fvisibility=hidden -std=c++11
aarch64-linux-gnu-g++ -g -O2 -ffile-prefix-map=/<<PKGBUILDDIR>>=. -fstack-protector-strong -fstack-clash-protection -Wformat -Werror=format-security -mbranch-protection=standard -Wdate-time -D_FORTIFY_SOURCE=2 -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fwrapv -O2 -Wl,-z,relro -g -O2 -ffile-prefix-map=/<<PKGBUILDDIR>>=. -fstack-protector-strong -fstack-clash-protection -Wformat -Werror=format-security -mbranch-protection=standard -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-aarch64-cpython-312/src/_hmmc.o -L/usr/lib/aarch64-linux-gnu -o build/lib.linux-aarch64-cpython-312/hmmlearn/_hmmc.cpython-312-aarch64-linux-gnu.so
installing to build/bdist.linux-aarch64/wheel
running install
running install_lib
creating build/bdist.linux-aarch64/wheel
creating build/bdist.linux-aarch64/wheel/hmmlearn
copying build/lib.linux-aarch64-cpython-312/hmmlearn/__init__.py -> build/bdist.linux-aarch64/wheel/./hmmlearn
copying build/lib.linux-aarch64-cpython-312/hmmlearn/utils.py -> build/bdist.linux-aarch64/wheel/./hmmlearn
copying build/lib.linux-aarch64-cpython-312/hmmlearn/_kl_divergence.py -> build/bdist.linux-aarch64/wheel/./hmmlearn
creating build/bdist.linux-aarch64/wheel/hmmlearn/tests
copying build/lib.linux-aarch64-cpython-312/hmmlearn/tests/test_categorical_hmm.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-312/hmmlearn/tests/__init__.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-312/hmmlearn/tests/test_base.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-312/hmmlearn/tests/conftest.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-312/hmmlearn/tests/test_variational_gaussian.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-312/hmmlearn/tests/test_gmm_hmm_multisequence.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-312/hmmlearn/tests/test_gmm_hmm.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-312/hmmlearn/tests/test_gaussian_hmm.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-312/hmmlearn/tests/test_poisson_hmm.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-312/hmmlearn/tests/test_kl_divergence.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-312/hmmlearn/tests/test_gmm_hmm_new.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-312/hmmlearn/tests/test_variational_categorical.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-312/hmmlearn/tests/test_utils.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-312/hmmlearn/tests/test_multinomial_hmm.py -> build/bdist.linux-aarch64/wheel/./hmmlearn/tests
copying build/lib.linux-aarch64-cpython-312/hmmlearn/_version.py -> build/bdist.linux-aarch64/wheel/./hmmlearn
copying build/lib.linux-aarch64-cpython-312/hmmlearn/hmm.py -> build/bdist.linux-aarch64/wheel/./hmmlearn
copying build/lib.linux-aarch64-cpython-312/hmmlearn/_emissions.py -> build/bdist.linux-aarch64/wheel/./hmmlearn
copying build/lib.linux-aarch64-cpython-312/hmmlearn/vhmm.py -> build/bdist.linux-aarch64/wheel/./hmmlearn
copying build/lib.linux-aarch64-cpython-312/hmmlearn/stats.py -> build/bdist.linux-aarch64/wheel/./hmmlearn
copying build/lib.linux-aarch64-cpython-312/hmmlearn/_hmmc.cpython-312-aarch64-linux-gnu.so -> build/bdist.linux-aarch64/wheel/./hmmlearn
copying build/lib.linux-aarch64-cpython-312/hmmlearn/_utils.py -> build/bdist.linux-aarch64/wheel/./hmmlearn
copying build/lib.linux-aarch64-cpython-312/hmmlearn/base.py -> build/bdist.linux-aarch64/wheel/./hmmlearn
running install_egg_info
running egg_info
writing lib/hmmlearn.egg-info/PKG-INFO
writing dependency_links to lib/hmmlearn.egg-info/dependency_links.txt
writing requirements to lib/hmmlearn.egg-info/requires.txt
writing top-level names to lib/hmmlearn.egg-info/top_level.txt
reading manifest file 'lib/hmmlearn.egg-info/SOURCES.txt'
adding license file 'LICENSE.txt'
adding license file 'AUTHORS.rst'
writing manifest file 'lib/hmmlearn.egg-info/SOURCES.txt'
Copying lib/hmmlearn.egg-info to build/bdist.linux-aarch64/wheel/./hmmlearn-0.3.0.egg-info
running install_scripts
creating build/bdist.linux-aarch64/wheel/hmmlearn-0.3.0.dist-info/WHEEL
creating '/<<PKGBUILDDIR>>/.pybuild/cpython3_3.12_hmmlearn/.tmp-l31y8yp7/hmmlearn-0.3.0-cp312-cp312-linux_aarch64.whl' and adding 'build/bdist.linux-aarch64/wheel' to it
adding 'hmmlearn/__init__.py'
adding 'hmmlearn/_emissions.py'
adding 'hmmlearn/_hmmc.cpython-312-aarch64-linux-gnu.so'
adding 'hmmlearn/_kl_divergence.py'
adding 'hmmlearn/_utils.py'
adding 'hmmlearn/_version.py'
adding 'hmmlearn/base.py'
adding 'hmmlearn/hmm.py'
adding 'hmmlearn/stats.py'
adding 'hmmlearn/utils.py'
adding 'hmmlearn/vhmm.py'
adding 'hmmlearn/tests/__init__.py'
adding 'hmmlearn/tests/conftest.py'
adding 'hmmlearn/tests/test_base.py'
adding 'hmmlearn/tests/test_categorical_hmm.py'
adding 'hmmlearn/tests/test_gaussian_hmm.py'
adding 'hmmlearn/tests/test_gmm_hmm.py'
adding 'hmmlearn/tests/test_gmm_hmm_multisequence.py'
adding 'hmmlearn/tests/test_gmm_hmm_new.py'
adding 'hmmlearn/tests/test_kl_divergence.py'
adding 'hmmlearn/tests/test_multinomial_hmm.py'
adding 'hmmlearn/tests/test_poisson_hmm.py'
adding 'hmmlearn/tests/test_utils.py'
adding 'hmmlearn/tests/test_variational_categorical.py'
adding 'hmmlearn/tests/test_variational_gaussian.py'
adding 'hmmlearn-0.3.0.dist-info/AUTHORS.rst'
adding 'hmmlearn-0.3.0.dist-info/LICENSE.txt'
adding 'hmmlearn-0.3.0.dist-info/METADATA'
adding 'hmmlearn-0.3.0.dist-info/WHEEL'
adding 'hmmlearn-0.3.0.dist-info/top_level.txt'
adding 'hmmlearn-0.3.0.dist-info/RECORD'
removing build/bdist.linux-aarch64/wheel
Successfully built hmmlearn-0.3.0-cp312-cp312-linux_aarch64.whl
I: pybuild plugin_pyproject:144: Unpacking wheel built for python3.12 with "installer" module
dh_auto_test -a -O--buildsystem=pybuild
I: pybuild base:311: cd /<<PKGBUILDDIR>>/.pybuild/cpython3_3.13_hmmlearn/build; python3.13 -m pytest --pyargs hmmlearn
set RNG seed to 1627838051
============================= test session starts ==============================
platform linux -- Python 3.13.0rc3, pytest-8.3.3, pluggy-1.5.0
rootdir: /<<PKGBUILDDIR>>
configfile: setup.cfg
plugins: typeguard-4.3.0
collected 320 items
hmmlearn/tests/test_base.py .....FFF.FF.FF.. [ 5%]
hmmlearn/tests/test_categorical_hmm.py FFFFFFFF..FF..FFFFFF.. [ 11%]
hmmlearn/tests/test_gaussian_hmm.py ..FF..FFFFFF..FFFFFFFF..FF.F..FF..FF [ 23%]
FFFF..FFFFFFFF..FF..FF..FFFFFF..FFFFFFFF..FF..FFFFFF..FFFFFFFF [ 42%]
hmmlearn/tests/test_gmm_hmm.py xxxxxxxxxxxxxxxxxx [ 48%]
hmmlearn/tests/test_gmm_hmm_multisequence.py FFFFFFFF [ 50%]
hmmlearn/tests/test_gmm_hmm_new.py ........FFFFFFxxFF........FFFFFFxxFF. [ 62%]
.......FFFFFFxxFF........FFFFFFxxFFFFFFFF [ 75%]
hmmlearn/tests/test_kl_divergence.py ..... [ 76%]
hmmlearn/tests/test_multinomial_hmm.py ..FF..FFFFFF..FF [ 81%]
hmmlearn/tests/test_poisson_hmm.py ..FFFFFFFFFF [ 85%]
hmmlearn/tests/test_utils.py ... [ 86%]
hmmlearn/tests/test_variational_categorical.py FFFFFFFFFFFF [ 90%]
hmmlearn/tests/test_variational_gaussian.py FFFFFFFFFFFFFFFFFFFFFFFFFFFF [ 98%]
FFFF [100%]
=================================== FAILURES ===================================
____________ TestBaseAgainstWikipedia.test_do_forward_scaling_pass _____________
self = <hmmlearn.tests.test_base.TestBaseAgainstWikipedia object at 0xffff965287d0>
def test_do_forward_scaling_pass(self):
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.hmm.startprob_, self.hmm.transmat_, self.frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/tests/test_base.py:79: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
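This failure, and every one that follows, is the same pybind11 GIL-state assertion raised by the freshly built _hmmc extension when it is called from Python 3.13.0rc3. A minimal sketch of the failing call pattern from the first test, assuming the just-built hmmlearn (and its _hmmc extension) is importable; the array values are illustrative, while the forward_scaling signature matches the test code quoted above:

# Hypothetical minimal reproduction of the failing call seen in these tests.
import numpy as np
from hmmlearn import _hmmc

startprob = np.array([0.6, 0.4])
transmat = np.array([[0.7, 0.3],
                     [0.4, 0.6]])
frameprob = np.array([[0.9, 0.2],
                      [0.1, 0.8],
                      [0.1, 0.8]])

# Under Python 3.13.0rc3 this call raised
# "RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure."
# instead of returning (log_prob, fwdlattice, scaling_factors).
log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
    startprob, transmat, frameprob)
print(log_prob)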
________________ TestBaseAgainstWikipedia.test_do_forward_pass _________________
self = <hmmlearn.tests.test_base.TestBaseAgainstWikipedia object at 0xffff96528910>
def test_do_forward_pass(self):
> log_prob, fwdlattice = _hmmc.forward_log(
self.hmm.startprob_, self.hmm.transmat_, self.log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/tests/test_base.py:91: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________ TestBaseAgainstWikipedia.test_do_backward_scaling_pass ____________
self = <hmmlearn.tests.test_base.TestBaseAgainstWikipedia object at 0xffff96555e00>
def test_do_backward_scaling_pass(self):
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.hmm.startprob_, self.hmm.transmat_, self.frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/tests/test_base.py:104: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________ TestBaseAgainstWikipedia.test_do_viterbi_pass _________________
self = <hmmlearn.tests.test_base.TestBaseAgainstWikipedia object at 0xffff965fd910>
def test_do_viterbi_pass(self):
> log_prob, state_sequence = _hmmc.viterbi(
self.hmm.startprob_, self.hmm.transmat_, self.log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/tests/test_base.py:129: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestBaseAgainstWikipedia.test_score_samples __________________
self = <hmmlearn.tests.test_base.TestBaseAgainstWikipedia object at 0xffff967dbbd0>
def test_score_samples(self):
# ``StubHMM`` ignores the values in ``X``, so we just pass in an
# array of the appropriate shape.
> log_prob, posteriors = self.hmm.score_samples(self.log_frameprob)
hmmlearn/tests/test_base.py:139:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = StubHMM(n_components=2)
X = array([[-0.10536052, -1.60943791],
[-0.10536052, -1.60943791],
[-2.30258509, -0.22314355],
[-0.10536052, -1.60943791],
[-0.10536052, -1.60943791]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestBaseConsistentWithGMM.test_score_samples _________________
self = <hmmlearn.tests.test_base.TestBaseConsistentWithGMM object at 0xffff96528a50>
def test_score_samples(self):
> log_prob, hmmposteriors = self.hmm.score_samples(self.log_frameprob)
hmmlearn/tests/test_base.py:177:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = StubHMM(n_components=8)
X = array([[-2.15491793e+00, -4.40617096e-01, -4.24390981e+00,
-1.65786991e+00, -8.68680448e-01, -1.45516199e-01,
...-1.14361252e+00,
-1.51859082e+00, -1.94643460e-01, -4.39633607e-01,
-1.85708978e+00, -2.98226086e-02]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestBaseConsistentWithGMM.test_decode _____________________
self = <hmmlearn.tests.test_base.TestBaseConsistentWithGMM object at 0xffff96528b90>
def test_decode(self):
> _log_prob, state_sequence = self.hmm.decode(self.log_frameprob)
hmmlearn/tests/test_base.py:188:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = StubHMM(n_components=8)
X = array([[-5.20626659e-01, -2.38496518e-01, -7.70916159e-01,
-5.88119077e-02, -2.90150462e+00, -3.78852360e+00,
...-8.66160399e-02,
-2.24644587e+00, -3.40204885e-02, -3.43843494e-01,
-9.81080152e-02, -1.04909926e-01]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestCategoricalAgainstWikipedia.test_decode_viterbi[scaling] _________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalAgainstWikipedia object at 0xffff95489d10>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_decode_viterbi(self, implementation):
# From http://en.wikipedia.org/wiki/Viterbi_algorithm:
# "This reveals that the observations ['walk', 'shop', 'clean']
# were most likely generated by states ['Sunny', 'Rainy', 'Rainy'],
# with probability 0.01344."
h = self.new_hmm(implementation)
X = [[0], [1], [2]]
> log_prob, state_sequence = h.decode(X, algorithm="viterbi")
hmmlearn/tests/test_categorical_hmm.py:37:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', n_components=2, n_features=3)
X = array([[0],
[1],
[2]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
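The PyGILState_Check() named in the RuntimeError is CPython's query for whether the calling thread currently holds the GIL. Conceptually, the failing call behaves like the sketch below (illustrative only, not pybind11's actual implementation, which raises the RuntimeError seen here instead of aborting):

    #include <Python.h>
    #include <cassert>

    // Conceptual sketch: touch an object's reference count only after
    // verifying that this thread holds the GIL.
    static void inc_ref_checked(PyObject *obj) {
        assert(PyGILState_Check() && "inc_ref() called without the GIL held");
        Py_INCREF(obj);
    }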
___________ TestCategoricalAgainstWikipedia.test_decode_viterbi[log] ___________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalAgainstWikipedia object at 0xffff95489e50>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_decode_viterbi(self, implementation):
# From http://en.wikipedia.org/wiki/Viterbi_algorithm:
# "This reveals that the observations ['walk', 'shop', 'clean']
# were most likely generated by states ['Sunny', 'Rainy', 'Rainy'],
# with probability 0.01344."
h = self.new_hmm(implementation)
X = [[0], [1], [2]]
> log_prob, state_sequence = h.decode(X, algorithm="viterbi")
hmmlearn/tests/test_categorical_hmm.py:37:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(n_components=2, n_features=3)
X = array([[0],
[1],
[2]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___________ TestCategoricalAgainstWikipedia.test_decode_map[scaling] ___________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalAgainstWikipedia object at 0xffff96556650>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_decode_map(self, implementation):
X = [[0], [1], [2]]
h = self.new_hmm(implementation)
> _log_prob, state_sequence = h.decode(X, algorithm="map")
hmmlearn/tests/test_categorical_hmm.py:45:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
hmmlearn/base.py:289: in _decode_map
_, posteriors = self.score_samples(X)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', n_components=2, n_features=3)
X = array([[0],
[1],
[2]]), lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_____________ TestCategoricalAgainstWikipedia.test_decode_map[log] _____________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalAgainstWikipedia object at 0xffff94f543e0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_decode_map(self, implementation):
X = [[0], [1], [2]]
h = self.new_hmm(implementation)
> _log_prob, state_sequence = h.decode(X, algorithm="map")
hmmlearn/tests/test_categorical_hmm.py:45:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
hmmlearn/base.py:289: in _decode_map
_, posteriors = self.score_samples(X)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(n_components=2, n_features=3)
X = array([[0],
[1],
[2]]), lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________ TestCategoricalAgainstWikipedia.test_predict[scaling] _____________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalAgainstWikipedia object at 0xffff954d2f90>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_predict(self, implementation):
X = [[0], [1], [2]]
h = self.new_hmm(implementation)
> state_sequence = h.predict(X)
hmmlearn/tests/test_categorical_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:363: in predict
_, state_sequence = self.decode(X, lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', n_components=2, n_features=3)
X = array([[0],
[1],
[2]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestCategoricalAgainstWikipedia.test_predict[log] _______________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalAgainstWikipedia object at 0xffff967dbdf0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_predict(self, implementation):
X = [[0], [1], [2]]
h = self.new_hmm(implementation)
> state_sequence = h.predict(X)
hmmlearn/tests/test_categorical_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:363: in predict
_, state_sequence = self.decode(X, lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(n_components=2, n_features=3)
X = array([[0],
[1],
[2]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestCategoricalHMM.test_n_features[scaling] __________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff9548aad0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_n_features(self, implementation):
sequences, _ = self.new_hmm(implementation).sample(500)
# set n_features
model = hmm.CategoricalHMM(
n_components=2, implementation=implementation)
> assert_log_likelihood_increasing(model, sequences, [500], 10)
hmmlearn/tests/test_categorical_hmm.py:80:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', init_params='', n_components=2,
n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF9B17F840)
X = array([[2],
[1],
[2],
[2],
[0],
[1],
[0],
[1],
[2],
[0]... [2],
[2],
[0],
[2],
[1],
[2],
[2],
[1],
[1],
[1]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___________________ TestCategoricalHMM.test_n_features[log] ____________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff9548b250>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_n_features(self, implementation):
sequences, _ = self.new_hmm(implementation).sample(500)
# set n_features
model = hmm.CategoricalHMM(
n_components=2, implementation=implementation)
> assert_log_likelihood_increasing(model, sequences, [500], 10)
hmmlearn/tests/test_categorical_hmm.py:80:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(init_params='', n_components=2, n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF9B17F840)
X = array([[1],
[2],
[1],
[0],
[0],
[0],
[0],
[2],
[1],
[0]... [1],
[1],
[0],
[1],
[0],
[1],
[1],
[2],
[0],
[0]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________ TestCategoricalHMM.test_score_samples[scaling] ________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff954d3ad0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples(self, implementation):
idx = np.repeat(np.arange(self.n_components), 10)
n_samples = len(idx)
X = np.random.randint(self.n_features, size=(n_samples, 1))
h = self.new_hmm(implementation)
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_categorical_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', n_components=2, n_features=3)
X = array([[2],
[1],
[1],
[2],
[1],
[2],
[1],
[1],
[0],
[2],
[0],
[1],
[2],
[1],
[2],
[1],
[2],
[0],
[2],
[1]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________________ TestCategoricalHMM.test_score_samples[log] __________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff94f008d0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples(self, implementation):
idx = np.repeat(np.arange(self.n_components), 10)
n_samples = len(idx)
X = np.random.randint(self.n_features, size=(n_samples, 1))
h = self.new_hmm(implementation)
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_categorical_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(n_components=2, n_features=3)
X = array([[0],
[2],
[0],
[2],
[0],
[2],
[2],
[2],
[0],
[2],
[2],
[1],
[2],
[0],
[0],
[2],
[1],
[0],
[2],
[1]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_____________________ TestCategoricalHMM.test_fit[scaling] _____________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff954a6d50>
implementation = 'scaling', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='ste', n_iter=5):
h = self.new_hmm(implementation)
h.params = params
lengths = np.array([10] * 10)
X, _state_sequence = h.sample(lengths.sum())
# Mess up the parameters and see if we can re-learn them.
h.startprob_ = normalized(np.random.random(self.n_components))
h.transmat_ = normalized(
np.random.random((self.n_components, self.n_components)),
axis=1)
h.emissionprob_ = normalized(
np.random.random((self.n_components, self.n_features)),
axis=1)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_categorical_hmm.py:140:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', init_params='', n_components=2,
n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF9B17F840)
X = array([[0],
[0],
[1],
[2],
[2],
[2],
[1],
[1],
[0],
[2]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______________________ TestCategoricalHMM.test_fit[log] _______________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff94e99130>
implementation = 'log', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='ste', n_iter=5):
h = self.new_hmm(implementation)
h.params = params
lengths = np.array([10] * 10)
X, _state_sequence = h.sample(lengths.sum())
# Mess up the parameters and see if we can re-learn them.
h.startprob_ = normalized(np.random.random(self.n_components))
h.transmat_ = normalized(
np.random.random((self.n_components, self.n_components)),
axis=1)
h.emissionprob_ = normalized(
np.random.random((self.n_components, self.n_features)),
axis=1)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_categorical_hmm.py:140:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(init_params='', n_components=2, n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF9B17F840)
X = array([[2],
[2],
[2],
[2],
[1],
[0],
[0],
[1],
[2],
[1]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestCategoricalHMM.test_fit_emissionprob[scaling] _______________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff94e99220>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_emissionprob(self, implementation):
> self.test_fit(implementation, 'e')
hmmlearn/tests/test_categorical_hmm.py:144:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_categorical_hmm.py:140: in test_fit
assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', init_params='', n_components=2,
n_features=3, n_iter=1, params='e',
random_state=RandomState(MT19937) at 0xFFFF9B17F840)
X = array([[2],
[0],
[1],
[2],
[0],
[0],
[1],
[1],
[1],
[2]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________ TestCategoricalHMM.test_fit_emissionprob[log] _________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff94eef070>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_emissionprob(self, implementation):
> self.test_fit(implementation, 'e')
hmmlearn/tests/test_categorical_hmm.py:144:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_categorical_hmm.py:140: in test_fit
assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(init_params='', n_components=2, n_features=3, n_iter=1,
params='e', random_state=RandomState(MT19937) at 0xFFFF9B17F840)
X = array([[0],
[2],
[2],
[0],
[2],
[2],
[1],
[2],
[2],
[1]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________ TestCategoricalHMM.test_fit_with_init[scaling] ________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff94eef5b0>
implementation = 'scaling', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_init(self, implementation, params='ste', n_iter=5):
lengths = [10] * 10
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(sum(lengths))
# use init_function to initialize parameters
h = hmm.CategoricalHMM(self.n_components, params=params,
init_params=params)
h._init(X, lengths)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_categorical_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(init_params='', n_components=2, n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF9B17F840)
X = array([[2],
[2],
[1],
[1],
[2],
[2],
[1],
[2],
[2],
[1]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________________ TestCategoricalHMM.test_fit_with_init[log] __________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff94f50a10>
implementation = 'log', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_init(self, implementation, params='ste', n_iter=5):
lengths = [10] * 10
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(sum(lengths))
# use init_function to initialize parameters
h = hmm.CategoricalHMM(self.n_components, params=params,
init_params=params)
h._init(X, lengths)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_categorical_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(init_params='', n_components=2, n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF9B17F840)
X = array([[2],
[1],
[2],
[1],
[1],
[2],
[1],
[1],
[2],
[2]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__ TestGaussianHMMWithSphericalCovars.test_score_samples_and_decode[scaling] ___
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff954a7450>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='st', n_components=3)
X = array([[-179.56000798, 79.57176561, 259.68798732],
[-180.56888339, 78.41505899, 261.05535316],
[-1...6363279 ],
[-140.61081384, -301.3193914 , -140.56172842],
[-139.79461543, -300.95336068, -139.67848205]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____ TestGaussianHMMWithSphericalCovars.test_score_samples_and_decode[log] _____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff94e99400>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', init_params='st', n_components=3)
X = array([[-179.56000798, 79.57176561, 259.68798732],
[-180.56888339, 78.41505899, 261.05535316],
[-1...6363279 ],
[-140.61081384, -301.3193914 , -140.56172842],
[-139.79461543, -300.95336068, -139.67848205]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_____________ TestGaussianHMMWithSphericalCovars.test_fit[scaling] _____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff94f74910>
implementation = 'scaling', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...95007275],
[-139.97005487, -299.93792764, -140.04085163],
[-239.95188158, 320.03972951, -119.97272471]])
_state_sequence = array([0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 0, 0, 2, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 1,...2,
2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 2, 1])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...02423171],
[-240.11318548, 319.89135278, -120.23468395],
[-240.09991625, 319.74125997, -119.91965919]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
_______________ TestGaussianHMMWithSphericalCovars.test_fit[log] _______________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff94f50870>
implementation = 'log', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(covariance_type='spherical', n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...95007275],
[-139.97005487, -299.93792764, -140.04085163],
[-239.95188158, 320.03972951, -119.97272471]])
_state_sequence = array([0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 0, 0, 2, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 1,...2,
2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 2, 1])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=3)
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...02423171],
[-240.11318548, 319.89135278, -120.23468395],
[-240.09991625, 319.74125997, -119.91965919]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
__________ TestGaussianHMMWithSphericalCovars.test_criterion[scaling] __________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff94b14a10>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFF94AA6440)
X = array([[ -90.15718286, 40.04508216, 130.03944716],
[-119.82025674, 159.91649324, -59.90349328],
[-1...84045482],
[-120.12894077, 159.84070667, -60.20323671],
[ -89.97836609, 39.94933366, 129.82682576]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________ TestGaussianHMMWithSphericalCovars.test_criterion[log] ____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff94b14ad0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFF9488C240)
X = array([[ -90.15718286, 40.04508216, 130.03944716],
[-119.82025674, 159.91649324, -59.90349328],
[-1...84045482],
[-120.12894077, 159.84070667, -60.20323671],
[ -89.97836609, 39.94933366, 129.82682576]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___ TestGaussianHMMWithSphericalCovars.test_fit_ignored_init_warns[scaling] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff94b1c7e0>
implementation = 'scaling'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffff94eff230>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here, is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
X = array([[ 4.39992016e-01, -4.28234395e-01, -3.12012681e-01],
[-5.68883385e-01, -1.58494101e+00, 1.05535316e+00]... [-2.20862064e-01, 4.83062914e-01, -1.95718567e+00],
[ 1.00961906e+00, 7.02226595e-01, -9.47509422e-01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
_____ TestGaussianHMMWithSphericalCovars.test_fit_ignored_init_warns[log] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff94b1c890>
implementation = 'log'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffff94a19a90>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here, is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=3)
X = array([[ 4.39992016e-01, -4.28234395e-01, -3.12012681e-01],
[-5.68883385e-01, -1.58494101e+00, 1.05535316e+00]... [-2.20862064e-01, 4.83062914e-01, -1.95718567e+00],
[ 1.00961906e+00, 7.02226595e-01, -9.47509422e-01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
_ TestGaussianHMMWithSphericalCovars.test_fit_sequences_of_different_length[scaling] _
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff94ec23c0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
X = array([[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.06556327, 0.05644419],
[0.76545582, 0.01178803, 0.61194334]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
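
For context on the lengths argument used here: fit(X, lengths=[3, 4, 5]) treats X as three independent sequences laid end to end, and _do_estep() (visible in the traceback) loops over the resulting sub-arrays. An illustrative equivalent of that split, not the library's own _utils.split_X_lengths:

import numpy as np

lengths = [3, 4, 5]
X = np.random.rand(sum(lengths), 3)                   # concatenated sequences, 3 features
sub_sequences = np.split(X, np.cumsum(lengths)[:-1])  # boundaries at 3 and 7
for sub_X in sub_sequences:
    print(sub_X.shape)                                # (3, 3), (4, 3), (5, 3)
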
_ TestGaussianHMMWithSphericalCovars.test_fit_sequences_of_different_length[log] _
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff94ab0e50>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=3)
X = array([[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.06556327, 0.05644419],
[0.76545582, 0.01178803, 0.61194334]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_ TestGaussianHMMWithSphericalCovars.test_fit_with_length_one_signal[scaling] __
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff94ab30d0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
X = array([[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.06556327, 0.05644419],
[0.76545582, 0.011788...06, 0.59758229, 0.87239246],
[0.98302087, 0.46740328, 0.87574449],
[0.2960687 , 0.13129105, 0.84281793]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___ TestGaussianHMMWithSphericalCovars.test_fit_with_length_one_signal[log] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff94f79630>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=3)
X = array([[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.06556327, 0.05644419],
[0.76545582, 0.011788...06, 0.59758229, 0.87239246],
[0.98302087, 0.46740328, 0.87574449],
[0.2960687 , 0.13129105, 0.84281793]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
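
The two parametrizations exercised throughout this file differ only in which compiled forward pass they call: implementation="scaling" uses _hmmc.forward_scaling and implementation="log" uses _hmmc.forward_log. A pure-NumPy sketch of what those routines compute, illustrative only (the packaged versions live in the C++ extension and additionally guard the error cases):

import numpy as np
from scipy.special import logsumexp

def forward_scaling(startprob, transmat, frameprob):
    # Classic scaled forward recursion: normalize alpha at every step and
    # recover log P(X) from the scaling factors.
    n_samples, n_components = frameprob.shape
    fwd = np.empty((n_samples, n_components))
    scaling = np.empty(n_samples)
    fwd[0] = startprob * frameprob[0]
    scaling[0] = 1.0 / fwd[0].sum()      # the compiled routine guards the case where
    fwd[0] *= scaling[0]                 # this sum underflows to 0 with a ValueError
    for t in range(1, n_samples):        # (see test_underflow_from_scaling later on)
        fwd[t] = (fwd[t - 1] @ transmat) * frameprob[t]
        scaling[t] = 1.0 / fwd[t].sum()
        fwd[t] *= scaling[t]
    log_prob = -np.log(scaling).sum()
    return log_prob, fwd, scaling

def forward_log(startprob, transmat, log_frameprob):
    # Same recursion carried out entirely in log space via logsumexp.
    n_samples, n_components = log_frameprob.shape
    log_transmat = np.log(transmat)
    fwd = np.empty((n_samples, n_components))
    fwd[0] = np.log(startprob) + log_frameprob[0]
    for t in range(1, n_samples):
        fwd[t] = logsumexp(fwd[t - 1][:, None] + log_transmat, axis=0) \
                 + log_frameprob[t]
    return logsumexp(fwd[-1]), fwd
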
______ TestGaussianHMMWithSphericalCovars.test_fit_zero_variance[scaling] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff94f79940>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________ TestGaussianHMMWithSphericalCovars.test_fit_zero_variance[log] ________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff94f44350>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
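
The "zero variance" in this test's name comes from the data itself: the first feature is 715.0 in every row, so its sample variance is exactly 0 and a naively fitted Gaussian would be degenerate. A small sketch, assuming the documented GaussianHMM min_covar parameter (a floor applied to the re-estimated covariances, documented default 1e-3):

import numpy as np
from hmmlearn import hmm

print(np.full(9, 715.0).var())     # 0.0 -- the constant first column of the data above
h = hmm.GaussianHMM(n_components=3, covariance_type="spherical", min_covar=1e-3)
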
_______ TestGaussianHMMWithSphericalCovars.test_fit_with_priors[scaling] _______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff94f441d0>
implementation = 'scaling', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='', n_components=3, n_iter=1)
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...73790553],
[-180.18615346, 79.87077255, 259.73353861],
[-240.06028298, 320.09425446, -119.74998577]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestGaussianHMMWithSphericalCovars.test_fit_with_priors[log] _________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff96581e00>
implementation = 'log', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', init_params='', n_components=3,
n_iter=1)
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...73790553],
[-180.18615346, 79.87077255, 259.73353861],
[-240.06028298, 320.09425446, -119.74998577]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
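
assert_log_likelihood_increasing (hmmlearn/tests/__init__.py, visible in the traceback) refits the model and checks that EM does not decrease the training log-likelihood. One hypothetical way to trace that quantity by hand, not necessarily how the packaged helper is written:

def log_likelihood_trace(h, X, lengths, n_iter):
    # Run one EM iteration per fit() call and record the score after each.
    h.n_iter = 1
    h.init_params = ""                 # keep the current parameters between calls
    scores = []
    for _ in range(n_iter):
        h.fit(X, lengths=lengths)
        scores.append(h.score(X, lengths=lengths))
    return scores

# e.g. log_likelihood_trace(h_learn, X, lengths, 20) on objects like those built
# above should yield an (approximately) non-decreasing sequence.
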
_ TestGaussianHMMWithSphericalCovars.test_fit_startprob_and_transmat[scaling] __
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff94f54b00>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_startprob_and_transmat(self, implementation):
> self.test_fit(implementation, 'st')
hmmlearn/tests/test_gaussian_hmm.py:274:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_gaussian_hmm.py:89: in test_fit
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...02423171],
[-240.11318548, 319.89135278, -120.23468395],
[-240.09991625, 319.74125997, -119.91965919]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
___ TestGaussianHMMWithSphericalCovars.test_fit_startprob_and_transmat[log] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff94f549d0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_startprob_and_transmat(self, implementation):
> self.test_fit(implementation, 'st')
hmmlearn/tests/test_gaussian_hmm.py:274:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_gaussian_hmm.py:89: in test_fit
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=3)
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...02423171],
[-240.11318548, 319.89135278, -120.23468395],
[-240.09991625, 319.74125997, -119.91965919]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
_____ TestGaussianHMMWithSphericalCovars.test_underflow_from_scaling[log] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff94f01150>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_underflow_from_scaling(self, implementation):
# Setup an ill-conditioned dataset
data1 = self.prng.normal(0, 1, 100).tolist()
data2 = self.prng.normal(5, 1, 100).tolist()
data3 = self.prng.normal(0, 1, 100).tolist()
data4 = self.prng.normal(5, 1, 100).tolist()
data = np.concatenate([data1, data2, data3, data4])
# Insert an outlier
data[40] = 10000
data2d = data[:, None]
lengths = [len(data2d)]
h = hmm.GaussianHMM(2, n_iter=100, verbose=True,
covariance_type=self.covariance_type,
implementation=implementation, init_params="")
h.startprob_ = [0.0, 1]
h.transmat_ = [[0.4, 0.6], [0.6, 0.4]]
h.means_ = [[0], [5]]
h.covars_ = [[1], [1]]
if implementation == "scaling":
with pytest.raises(ValueError):
h.fit(data2d, lengths)
else:
> h.fit(data2d, lengths)
hmmlearn/tests/test_gaussian_hmm.py:300:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', init_params='', n_components=2,
n_iter=100, verbose=True)
X = array([[ 4.39992016e-01],
[-4.28234395e-01],
[-3.12012681e-01],
[-5.68883385e-01],
[-1.584...83917623e+00],
[ 5.48982119e+00],
[ 7.23344018e+00],
[ 4.20497381e+00],
[ 4.96426274e+00]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
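
The asymmetry between the two parametrizations here is the point of the test: with the outlier data[40] = 10000 and unit-variance components at 0 and 5, the per-frame Gaussian density underflows to exactly zero in linear space, so the "scaling" branch is expected to raise ValueError while the "log" branch keeps a finite (hugely negative) log-density. A small numerical illustration using the fixture's values:

from scipy.stats import norm

x, mean, sd = 10000.0, 5.0, 1.0
print(norm.pdf(x, loc=mean, scale=sd))     # 0.0 -- nothing for forward_scaling to renormalize
print(norm.logpdf(x, loc=mean, scale=sd))  # about -5.0e7, still usable by forward_log
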
___ TestGaussianHMMWithDiagonalCovars.test_score_samples_and_decode[scaling] ___
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff94f00d10>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', init_params='st', n_components=3)
X = array([[-181.58494101, 81.05535316, 258.07342089],
[-179.30141612, 79.25379857, 259.84337334],
[-1...79461543],
[-140.95336068, -299.67848205, -141.52093867],
[-142.16145292, -299.65468671, -139.12103062]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_____ TestGaussianHMMWithDiagonalCovars.test_score_samples_and_decode[log] _____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff954a7550>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(init_params='st', n_components=3)
X = array([[-181.58494101, 81.05535316, 258.07342089],
[-179.30141612, 79.25379857, 259.84337334],
[-1...79461543],
[-140.95336068, -299.67848205, -141.52093867],
[-142.16145292, -299.65468671, -139.12103062]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
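
For reference, score_samples() as exercised above returns the total log-likelihood together with per-sample state posteriors, and posteriors.argmax() is what the test's comment about well-separated means refers to. A self-contained sketch with hand-set (made-up) parameters, following the same pattern the tests use of assigning startprob_/transmat_/means_/covars_ directly:

import numpy as np
from hmmlearn import hmm

h = hmm.GaussianHMM(n_components=2, covariance_type="diag", init_params="")
h.startprob_ = np.array([0.5, 0.5])
h.transmat_ = np.array([[0.9, 0.1], [0.1, 0.9]])
h.means_ = np.array([[0.0], [5.0]])
h.covars_ = np.array([[1.0], [1.0]])
X = np.concatenate([np.random.randn(5, 1), 5.0 + np.random.randn(5, 1)])
ll, posteriors = h.score_samples(X)   # posteriors: shape (n_samples, n_components)
print(posteriors.argmax(axis=1))      # most likely component per sample
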
_____________ TestGaussianHMMWithDiagonalCovars.test_fit[scaling] ______________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff94e99b80>
implementation = 'scaling', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(implementation='scaling', n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-180.10548806, 79.65731382, 260.1106488 ],
[-240.1207402 , 319.97139868, -120.01791514],
[-2...02728733],
[-239.95365322, 320.03452379, -120.02851028],
[-179.97832262, 80.04811842, 260.03537787]])
_state_sequence = array([0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 2, 2, 1, 0, 0, 2, 1, 1,
1, 0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 2, 2,...2,
2, 1, 1, 1, 0, 0, 0, 1, 1, 2, 2, 1, 1, 1, 2, 1, 0, 0, 0, 1, 1, 0,
1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', n_components=3)
X = array([[-180.10548806, 79.65731382, 260.1106488 ],
[-240.1207402 , 319.97139868, -120.01791514],
[-2...98639428],
[-180.18651806, 79.88681452, 259.90325311],
[-179.93614812, 79.90008375, 259.76960024]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
_______________ TestGaussianHMMWithDiagonalCovars.test_fit[log] ________________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff94f76b30>
implementation = 'log', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(n_components=3), lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-180.10548806, 79.65731382, 260.1106488 ],
[-240.1207402 , 319.97139868, -120.01791514],
[-2...02728733],
[-239.95365322, 320.03452379, -120.02851028],
[-179.97832262, 80.04811842, 260.03537787]])
_state_sequence = array([0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 2, 2, 1, 0, 0, 2, 1, 1,
1, 0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 2, 2,...2,
2, 1, 1, 1, 0, 0, 0, 1, 1, 2, 2, 1, 1, 1, 2, 1, 0, 0, 0, 1, 1, 0,
1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(n_components=3)
X = array([[-180.10548806, 79.65731382, 260.1106488 ],
[-240.1207402 , 319.97139868, -120.01791514],
[-2...98639428],
[-180.18651806, 79.88681452, 259.90325311],
[-179.93614812, 79.90008375, 259.76960024]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
__________ TestGaussianHMMWithDiagonalCovars.test_criterion[scaling] ___________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff94f77070>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFF9488E640)
X = array([[ -89.96055284, 39.80222668, 130.05051096],
[-120.05552152, 160.1845284 , -59.94002531],
[-1...75723798],
[-120.10591007, 159.86762656, -60.12630269],
[ -90.17317424, 40.02722058, 129.94323241]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________ TestGaussianHMMWithDiagonalCovars.test_criterion[log] _____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff94f502c0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFF9488D840)
X = array([[ -89.96055284, 39.80222668, 130.05051096],
[-120.05552152, 160.1845284 , -59.94002531],
[-1...75723798],
[-120.10591007, 159.86762656, -60.12630269],
[ -90.17317424, 40.02722058, 129.94323241]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
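
test_criterion compares 2-, 3- and 4-state models via their information criteria. As a reference for what that loop measures, here is a generic AIC/BIC computed from a fitted model's log-likelihood; the parameter count assumes diagonal covariances (this test class) and is not necessarily the count hmmlearn itself uses:

import numpy as np

def information_criteria(h, X):
    log_l = h.score(X)
    n, d = X.shape
    k = ((h.n_components - 1)                       # free startprob entries
         + h.n_components * (h.n_components - 1)    # free transmat entries
         + h.n_components * d                       # means
         + h.n_components * d)                      # diagonal covariances
    return 2 * k - 2 * log_l, k * np.log(n) - 2 * log_l   # (AIC, BIC)
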
____ TestGaussianHMMWithDiagonalCovars.test_fit_ignored_init_warns[scaling] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff94b15cd0>
implementation = 'scaling'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffff949156e0>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here, is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', n_components=3)
X = array([[-1.58494101e+00, 1.05535316e+00, -1.92657911e+00],
[ 6.98583878e-01, -7.46201430e-01, -1.56626664e-01]... [ 7.02226595e-01, -9.47509422e-01, -1.16620867e+00],
[ 4.79956068e-01, 3.68105791e-01, 2.45414301e-01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
______ TestGaussianHMMWithDiagonalCovars.test_fit_ignored_init_warns[log] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff94b15e50>
implementation = 'log'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffff949eb9b0>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here, is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(n_components=3)
X = array([[-1.58494101e+00, 1.05535316e+00, -1.92657911e+00],
[ 6.98583878e-01, -7.46201430e-01, -1.56626664e-01]... [ 7.02226595e-01, -9.47509422e-01, -1.16620867e+00],
[ 4.79956068e-01, 3.68105791e-01, 2.45414301e-01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
_ TestGaussianHMMWithDiagonalCovars.test_fit_sequences_of_different_length[scaling] _
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff94f7d630>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', n_components=3)
X = array([[0.76545582, 0.01178803, 0.61194334],
[0.33188226, 0.55964837, 0.33549965],
[0.41118255, 0.0768555 , 0.85304299]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_ TestGaussianHMMWithDiagonalCovars.test_fit_sequences_of_different_length[log] _
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff94f7d6d0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(n_components=3)
X = array([[0.76545582, 0.01178803, 0.61194334],
[0.33188226, 0.55964837, 0.33549965],
[0.41118255, 0.0768555 , 0.85304299]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__ TestGaussianHMMWithDiagonalCovars.test_fit_with_length_one_signal[scaling] __
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff94ec1640>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', n_components=3)
X = array([[0.76545582, 0.01178803, 0.61194334],
[0.33188226, 0.55964837, 0.33549965],
[0.41118255, 0.076855...7 , 0.13129105, 0.84281793],
[0.6590363 , 0.5954396 , 0.4363537 ],
[0.35625033, 0.58713093, 0.14947134]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____ TestGaussianHMMWithDiagonalCovars.test_fit_with_length_one_signal[log] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff94f65fd0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(n_components=3)
X = array([[0.76545582, 0.01178803, 0.61194334],
[0.33188226, 0.55964837, 0.33549965],
[0.41118255, 0.076855...7 , 0.13129105, 0.84281793],
[0.6590363 , 0.5954396 , 0.4363537 ],
[0.35625033, 0.58713093, 0.14947134]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______ TestGaussianHMMWithDiagonalCovars.test_fit_zero_variance[scaling] _______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff94f66550>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________ TestGaussianHMMWithDiagonalCovars.test_fit_zero_variance[log] _________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff94f608a0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______ TestGaussianHMMWithDiagonalCovars.test_fit_with_priors[scaling] ________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff94f60bb0>
implementation = 'scaling', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', init_params='', n_components=3, n_iter=1)
X = array([[-180.10548806, 79.65731382, 260.1106488 ],
[-240.1207402 , 319.97139868, -120.01791514],
[-2...8371198 ],
[-180.26646139, 79.7657748 , 259.85521097],
[-239.93733261, 319.93811216, -119.84462714]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestGaussianHMMWithDiagonalCovars.test_fit_with_priors[log] __________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff94f45970>
implementation = 'log', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(init_params='', n_components=3, n_iter=1)
X = array([[-180.10548806, 79.65731382, 260.1106488 ],
[-240.1207402 , 319.97139868, -120.01791514],
[-2...8371198 ],
[-180.26646139, 79.7657748 , 259.85521097],
[-239.93733261, 319.93811216, -119.84462714]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________ TestGaussianHMMWithDiagonalCovars.test_fit_left_right[scaling] ________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff94f548a0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_left_right(self, implementation):
transmat = np.zeros((self.n_components, self.n_components))
# Left-to-right: each state is connected to itself and its
# direct successor.
for i in range(self.n_components):
if i == self.n_components - 1:
transmat[i, i] = 1.0
else:
transmat[i, i] = transmat[i, i + 1] = 0.5
# Always start in first state
startprob = np.zeros(self.n_components)
startprob[0] = 1.0
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, covariance_type="diag",
params="mct", init_params="cm",
implementation=implementation)
h.startprob_ = startprob.copy()
h.transmat_ = transmat.copy()
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:343:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', init_params='cm', n_components=3,
params='mct')
X = array([[0.76545582, 0.01178803, 0.61194334],
[0.33188226, 0.55964837, 0.33549965],
[0.41118255, 0.076855...88, 0.35095822, 0.70533161],
[0.82070374, 0.134563 , 0.60472616],
[0.28314828, 0.50640782, 0.03846043]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________ TestGaussianHMMWithDiagonalCovars.test_fit_left_right[log] __________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff94f54770>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_left_right(self, implementation):
transmat = np.zeros((self.n_components, self.n_components))
# Left-to-right: each state is connected to itself and its
# direct successor.
for i in range(self.n_components):
if i == self.n_components - 1:
transmat[i, i] = 1.0
else:
transmat[i, i] = transmat[i, i + 1] = 0.5
# Always start in first state
startprob = np.zeros(self.n_components)
startprob[0] = 1.0
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, covariance_type="diag",
params="mct", init_params="cm",
implementation=implementation)
h.startprob_ = startprob.copy()
h.transmat_ = transmat.copy()
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:343:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(init_params='cm', n_components=3, params='mct')
X = array([[0.76545582, 0.01178803, 0.61194334],
[0.33188226, 0.55964837, 0.33549965],
[0.41118255, 0.076855...88, 0.35095822, 0.70533161],
[0.82070374, 0.134563 , 0.60472616],
[0.28314828, 0.50640782, 0.03846043]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_____ TestGaussianHMMWithTiedCovars.test_score_samples_and_decode[scaling] _____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff94f54c30>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', init_params='st',
n_components=3)
X = array([[-178.59797475, 79.5657409 , 258.74575809],
[-179.66842145, 79.69139951, 259.84626451],
[-1...49160395],
[-141.24628501, -300.37993208, -140.27125813],
[-141.45463451, -299.54832455, -138.87519327]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______ TestGaussianHMMWithTiedCovars.test_score_samples_and_decode[log] _______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff94f54d60>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', init_params='st', n_components=3)
X = array([[-178.59797475, 79.5657409 , 258.74575809],
[-179.66842145, 79.69139951, 259.84626451],
[-1...49160395],
[-141.24628501, -300.37993208, -140.27125813],
[-141.45463451, -299.54832455, -138.87519327]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______________ TestGaussianHMMWithTiedCovars.test_fit[scaling] ________________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff94f01370>
implementation = 'scaling', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-179.41146631, 80.33324746, 259.76933407],
[-176.75522731, 79.71012749, 258.18265165],
[-2...09616848],
[-141.31913419, -300.07886934, -139.15691988],
[-239.52207412, 320.06899962, -119.91036927]])
_state_sequence = array([0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 2,
2, 2, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2,...1,
1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1,
1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 2, 1])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=3)
X = array([[-179.41146631, 80.33324746, 259.76933407],
[-176.75522731, 79.71012749, 258.18265165],
[-2...14327792],
[-178.06475112, 79.6954867 , 259.46980914],
[-239.98187589, 319.8163777 , -120.70908873]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
_________________ TestGaussianHMMWithTiedCovars.test_fit[log] __________________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff954a7750>
implementation = 'log', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(covariance_type='tied', n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-180.0224808 , 79.32859982, 259.7961988 ],
[-179.31925964, 77.64156545, 257.31647345],
[-2...17633544],
[-140.33037743, -298.87711678, -139.0284141 ],
[-240.21399682, 319.66678418, -120.26050279]])
_state_sequence = array([0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 2,
2, 2, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2,...1,
1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1,
1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 2, 1])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', n_components=3)
X = array([[-180.0224808 , 79.32859982, 259.7961988 ],
[-179.31925964, 77.64156545, 257.31647345],
[-2...67528152],
[-180.03041427, 78.82788094, 258.3974288 ],
[-239.38751943, 319.9336772 , -120.25333739]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
____________ TestGaussianHMMWithTiedCovars.test_criterion[scaling] _____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff954a7850>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=2,
n_iter=500, random_state=RandomState(MT19937) at 0xFFFF9488D240)
X = array([[ -88.2404408 , 38.77300426, 130.20714739],
[-121.65522742, 161.09144738, -59.6686406 ],
[-1...73781105],
[-118.28001014, 160.32088887, -60.219151 ],
[ -89.95949436, 40.65320232, 129.40891014]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestGaussianHMMWithTiedCovars.test_criterion[log] _______________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff94e99c70>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFF94884340)
X = array([[ -87.86467411, 39.96682958, 129.6412005 ],
[-121.86926167, 160.42828162, -59.36599984],
[-1...42295617],
[-119.00752326, 160.44136996, -61.39907298],
[ -90.51693579, 39.87843881, 129.31938765]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______ TestGaussianHMMWithTiedCovars.test_fit_ignored_init_warns[scaling] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff94e99d60>
implementation = 'scaling'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffff94886050>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here, is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=3)
X = array([[ 1.40202525e+00, -4.34259100e-01, -1.25424191e+00],
[ 3.31578554e-01, -3.08600486e-01, -1.53735485e-01]... [ 6.04857190e-01, -2.51936017e-01, 8.99130290e-01],
[ 1.60788687e+00, -1.30106516e+00, 7.60125909e-01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
________ TestGaussianHMMWithTiedCovars.test_fit_ignored_init_warns[log] ________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff94f75e10>
implementation = 'log'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffff94886550>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here, is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', n_components=3)
X = array([[ 1.40202525e+00, -4.34259100e-01, -1.25424191e+00],
[ 3.31578554e-01, -3.08600486e-01, -1.53735485e-01]... [ 6.04857190e-01, -2.51936017e-01, 8.99130290e-01],
[ 1.60788687e+00, -1.30106516e+00, 7.60125909e-01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
_ TestGaussianHMMWithTiedCovars.test_fit_sequences_of_different_length[scaling] _
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff94b15010>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=3)
X = array([[0.41366737, 0.77872881, 0.58390137],
[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.06556327, 0.05644419]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__ TestGaussianHMMWithTiedCovars.test_fit_sequences_of_different_length[log] ___
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff94b14b90>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', n_components=3)
X = array([[0.41366737, 0.77872881, 0.58390137],
[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.06556327, 0.05644419]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____ TestGaussianHMMWithTiedCovars.test_fit_with_length_one_signal[scaling] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff94b1cb50>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=3)
X = array([[0.41366737, 0.77872881, 0.58390137],
[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.065563...47, 0.76688005, 0.83198977],
[0.30977806, 0.59758229, 0.87239246],
[0.98302087, 0.46740328, 0.87574449]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______ TestGaussianHMMWithTiedCovars.test_fit_with_length_one_signal[log] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff94b1caa0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', n_components=3)
X = array([[0.41366737, 0.77872881, 0.58390137],
[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.065563...47, 0.76688005, 0.83198977],
[0.30977806, 0.59758229, 0.87239246],
[0.98302087, 0.46740328, 0.87574449]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________ TestGaussianHMMWithTiedCovars.test_fit_zero_variance[scaling] _________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff94f7e0d0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________ TestGaussianHMMWithTiedCovars.test_fit_zero_variance[log] ___________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff94f7e170>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestGaussianHMMWithTiedCovars.test_fit_with_priors[scaling] __________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff94ec1880>
implementation = 'scaling', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', init_params='',
n_components=3, n_iter=1)
X = array([[-180.19663229, 79.31045579, 260.17680555],
[-178.69131998, 77.00020501, 261.56152493],
[-2...72885654],
[-241.11556898, 321.43375394, -120.15811275],
[-241.02910075, 322.80332307, -119.20017802]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___________ TestGaussianHMMWithTiedCovars.test_fit_with_priors[log] ____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff94f5d8d0>
implementation = 'log', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', init_params='', n_components=3, n_iter=1)
X = array([[-180.25315051, 79.63647536, 259.50318345],
[-179.59032769, 77.37908629, 257.63045708],
[-2...28361575],
[-240.66358148, 320.82956045, -118.58194222],
[-240.60549373, 320.47918473, -117.12481366]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_____ TestGaussianHMMWithFullCovars.test_score_samples_and_decode[scaling] _____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff94f54e90>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', init_params='st',
n_components=3)
X = array([[-178.91762365, 78.21428218, 259.72766302],
[-180.29036839, 81.65614717, 258.7654313 ],
[-1...12852298],
[-138.75200849, -298.93773986, -141.62863338],
[-139.88757229, -300.99150251, -139.32120466]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______ TestGaussianHMMWithFullCovars.test_score_samples_and_decode[log] _______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff94f54fc0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', init_params='st', n_components=3)
X = array([[-178.91762365, 78.21428218, 259.72766302],
[-180.29036839, 81.65614717, 258.7654313 ],
[-1...12852298],
[-138.75200849, -298.93773986, -141.62863338],
[-139.88757229, -300.99150251, -139.32120466]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______________ TestGaussianHMMWithFullCovars.test_fit[scaling] ________________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff94f01590>
implementation = 'scaling', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(covariance_type='full', implementation='scaling', n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-180.58095266, 76.77820758, 260.82999791],
[-179.69582924, 80.01947451, 260.89843931],
[-1...80692975],
[-239.96300261, 321.60437243, -119.98216274],
[-243.00912572, 319.79523591, -120.22074218]])
_state_sequence = array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 2, 2, 2, 1, 1, 1, 1, 2, 1,
1, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0,...0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1,
1, 0, 2, 0, 1, 1, 1, 1, 1, 0, 1, 1])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', n_components=3)
X = array([[-180.58095266, 76.77820758, 260.82999791],
[-179.69582924, 80.01947451, 260.89843931],
[-1...69639643],
[-180.66903787, 80.96928136, 260.49492216],
[-179.21107272, 83.36164545, 259.72145566]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
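The four warnings above come from hmmlearn/base.py:514 and are expected for this test: test_fit assigns startprob_, transmat_, means_ and covars_ directly while leaving init_params at its default 'stmc', so fit() re-initialises all four. They are unrelated to the pybind11 RuntimeError. For reference, a short sketch (not taken from the test suite) of the pattern other tests in this log use to keep manually assigned parameters, namely dropping the corresponding letters from init_params:

    # Illustrative only: keep manually set start/transition probabilities by
    # removing 's' and 't' from init_params so fit() does not overwrite them.
    import numpy as np
    from hmmlearn import hmm

    model = hmm.GaussianHMM(n_components=3, covariance_type="full",
                            init_params="mc")   # only means and covars re-initialised
    model.startprob_ = np.array([0.6, 0.3, 0.1])
    model.transmat_ = np.full((3, 3), 1.0 / 3)
    X = np.random.default_rng(0).normal(size=(100, 3))
    model.fit(X)   # no "will be overwritten" warnings for startprob_/transmat_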
_________________ TestGaussianHMMWithFullCovars.test_fit[log] __________________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff954a7950>
implementation = 'log', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(covariance_type='full', n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-180.58095266, 76.77820758, 260.82999791],
[-179.69582924, 80.01947451, 260.89843931],
[-1...80692975],
[-239.96300261, 321.60437243, -119.98216274],
[-243.00912572, 319.79523591, -120.22074218]])
_state_sequence = array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 2, 2, 2, 1, 1, 1, 1, 2, 1,
1, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0,...0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1,
1, 0, 2, 0, 1, 1, 1, 1, 1, 0, 1, 1])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', n_components=3)
X = array([[-180.58095266, 76.77820758, 260.82999791],
[-179.69582924, 80.01947451, 260.89843931],
[-1...69639643],
[-180.66903787, 80.96928136, 260.49492216],
[-179.21107272, 83.36164545, 259.72145566]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
____________ TestGaussianHMMWithFullCovars.test_criterion[scaling] _____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff954a7a50>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', n_components=2,
n_iter=500, random_state=RandomState(MT19937) at 0xFFFF94885840)
X = array([[ -89.29523181, 41.84500715, 129.19454811],
[-121.84420946, 159.33718199, -59.47865859],
[-1...24221311],
[-119.33776175, 161.48568424, -60.76453046],
[ -89.46135014, 39.43571927, 130.17918399]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestGaussianHMMWithFullCovars.test_criterion[log] _______________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff94e99e50>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFF94AA7240)
X = array([[ -89.29523181, 41.84500715, 129.19454811],
[-121.84420946, 159.33718199, -59.47865859],
[-1...24221311],
[-119.33776175, 161.48568424, -60.76453046],
[ -89.46135014, 39.43571927, 130.17918399]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______ TestGaussianHMMWithFullCovars.test_fit_ignored_init_warns[scaling] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff94e99f40>
implementation = 'scaling'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffff94a2f3f0>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here, is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', n_components=3)
X = array([[ 1.08237635e+00, -1.78571782e+00, -2.72336983e-01],
[-2.90368389e-01, 1.65614717e+00, -1.23456870e+00]... [-8.50841186e-02, -3.43870735e-01, -6.18822776e-01],
[ 3.90241258e-01, -1.85025630e+00, -9.02633482e-01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
________ TestGaussianHMMWithFullCovars.test_fit_ignored_init_warns[log] ________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff94f6d7f0>
implementation = 'log'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffff94a2f310>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here, is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', n_components=3)
X = array([[ 1.08237635e+00, -1.78571782e+00, -2.72336983e-01],
[-2.90368389e-01, 1.65614717e+00, -1.23456870e+00]... [-8.50841186e-02, -3.43870735e-01, -6.18822776e-01],
[ 3.90241258e-01, -1.85025630e+00, -9.02633482e-01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
_ TestGaussianHMMWithFullCovars.test_fit_sequences_of_different_length[scaling] _
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff94b16e10>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', n_components=3)
X = array([[0.35625033, 0.58713093, 0.14947134],
[0.1712386 , 0.39716452, 0.63795156],
[0.37251995, 0.00240676, 0.54881636]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__ TestGaussianHMMWithFullCovars.test_fit_sequences_of_different_length[log] ___
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff94b16f90>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', n_components=3)
X = array([[0.35625033, 0.58713093, 0.14947134],
[0.1712386 , 0.39716452, 0.63795156],
[0.37251995, 0.00240676, 0.54881636]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____ TestGaussianHMMWithFullCovars.test_fit_with_length_one_signal[scaling] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff94b1e2b0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', n_components=3)
X = array([[0.35625033, 0.58713093, 0.14947134],
[0.1712386 , 0.39716452, 0.63795156],
[0.37251995, 0.002406...88, 0.35095822, 0.70533161],
[0.82070374, 0.134563 , 0.60472616],
[0.28314828, 0.50640782, 0.03846043]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______ TestGaussianHMMWithFullCovars.test_fit_with_length_one_signal[log] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff94b1e360>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', n_components=3)
X = array([[0.35625033, 0.58713093, 0.14947134],
[0.1712386 , 0.39716452, 0.63795156],
[0.37251995, 0.002406...88, 0.35095822, 0.70533161],
[0.82070374, 0.134563 , 0.60472616],
[0.28314828, 0.50640782, 0.03846043]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________ TestGaussianHMMWithFullCovars.test_fit_zero_variance[scaling] _________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff94f7eb70>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:922 Fitting a model with 50 free scalar parameters with only 36 data points will result in a degenerate solution.
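The degenerate-solution warning above matches the standard parameter count for a 3-state, full-covariance GaussianHMM on the 9x4 array from issue #2: 2 free start probabilities, 6 free transition probabilities, 12 means and 30 entries of symmetric covariance matrices give the 50 free scalar parameters quoted, against 9 x 4 = 36 observed values. A small reconstruction of that arithmetic (not hmmlearn code, just the accounting):

    # Back-of-the-envelope check of the numbers in the warning for a 3-state,
    # full-covariance GaussianHMM fitted to a 9x4 observation matrix.
    n, d, t = 3, 4, 9                        # states, features, samples
    startprob = n - 1                        # probabilities constrained to sum to 1
    transmat = n * (n - 1)                   # each row constrained to sum to 1
    means = n * d
    covars = n * d * (d + 1) // 2            # symmetric full covariance per state
    print(startprob + transmat + means + covars, t * d)   # 50 36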
__________ TestGaussianHMMWithFullCovars.test_fit_zero_variance[log] ___________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff94f7ec10>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:922 Fitting a model with 50 free scalar parameters with only 36 data points will result in a degenerate solution.
_________ TestGaussianHMMWithFullCovars.test_fit_with_priors[scaling] __________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff94ec1910>
implementation = 'scaling', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', init_params='',
n_components=3, n_iter=1)
X = array([[-180.58095266, 76.77820758, 260.82999791],
[-179.69582924, 80.01947451, 260.89843931],
[-1...95169853],
[-239.53275248, 319.24695192, -120.48672946],
[-242.40666906, 318.95372592, -121.39814967]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___________ TestGaussianHMMWithFullCovars.test_fit_with_priors[log] ____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff94a7a150>
implementation = 'log', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', init_params='', n_components=3, n_iter=1)
X = array([[-180.58095266, 76.77820758, 260.82999791],
[-179.69582924, 80.01947451, 260.89843931],
[-1...95169853],
[-239.53275248, 319.24695192, -120.48672946],
[-242.40666906, 318.95372592, -121.39814967]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
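For context, test_fit_with_priors exercises MAP-style fitting: prior hyperparameters are attached to a GaussianHMM before fit, and the model is re-learned from a uniform re-initialization. A minimal sketch of that pattern with made-up toy values (all array contents below are illustrative, not the test fixtures):

# Sketch of the priors pattern from the test above, with toy values.
import numpy as np
from hmmlearn import hmm

n_components, n_features = 3, 2
model = hmm.GaussianHMM(n_components, "full",
                        init_params="mc", params="stmc", n_iter=20)
# Pseudo-count priors on start and transition probabilities.
model.startprob_prior = np.full(n_components, 2.0)
model.transmat_prior = np.full((n_components, n_components), 2.0)
# Priors and weights on the Gaussian emission parameters.
model.means_prior = np.zeros((n_components, n_features))
model.means_weight = 2.0
model.covars_prior = np.tile(np.eye(n_features), (n_components, 1, 1))
model.covars_weight = 2.0 + n_features   # 'full'/'tied' add n_features, as above
# Uniform (non-random) start/transition values, as in the test.
init = 1.0 / n_components
model.startprob_ = np.full(n_components, init)
model.transmat_ = np.full((n_components, n_components), init)

X = np.random.default_rng(0).normal(size=(500, n_features))   # toy data
model.fit(X)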
_ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-diag] __
covariance_type = 'diag', implementation = 'scaling', init_params = 'mcw'
verbose = False
@pytest.mark.parametrize("covariance_type",
["diag", "spherical", "tied", "full"])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering(
covariance_type, implementation, init_params='mcw', verbose=False
):
"""
Sanity check GMM-HMM fit behaviour when run on multiple sequences
aka multiple frames.
Training data consumed during GMM-HMM fit is packed into a single
array X containing one or more sequences. In the case where
there are two or more input sequences, the ordering that the
sequences are packed into X should not influence the results
of the fit. Major differences in convergence during EM
iterations caused merely by permuting sequence order in the input
indicate a likely defect in the fit implementation.
Note: the ordering of samples inside a given sequence
is very meaningful; permuting the order of samples would
destroy the state transition structure in the input data.
See issue 410 on github:
https://github.com/hmmlearn/hmmlearn/issues/410
"""
sequence_data = EXAMPLE_SEQUENCES_ISSUE_410_PRUNED
scores = []
for p in make_permutations(sequence_data):
sequences = sequence_data[p]
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]
model = hmm.GMMHMM(
n_components=2,
n_mix=2,
n_iter=100,
covariance_type=covariance_type,
verbose=verbose,
init_params=init_params,
random_state=1234,
implementation=implementation
)
# don't use random parameters for testing
init = 1. / model.n_components
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
model.monitor_ = StrictMonitor(
model.monitor_.tol,
model.monitor_.n_iter,
model.monitor_.verbose,
)
> model.fit(X, lengths)
hmmlearn/tests/test_gmm_hmm_multisequence.py:280:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5, -1.5, -1.5],
[-1.5, -1.5, -1.5, -1.5]],
[[-1.5, -1.5, -1.5, -...n_components=2,
n_iter=100, n_mix=2, random_state=1234,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[0.00992058, 0.44151747, 0.5395124 , 0.40644765],
[0.00962487, 0.45613006, 0.52375835, 0.3899082 ],
...00656536, 0.39309287, 0.60035396, 0.41596898],
[0.00693208, 0.37821782, 0.59813255, 0.4394344 ]], dtype=float32)
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
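The docstring above describes the multi-sequence calling convention that every parametrization of this test exercises: sequences are concatenated into a single array X, their lengths are passed separately, and reordering whole sequences should not change the fitted model. A minimal sketch of that convention with made-up toy sequences standing in for EXAMPLE_SEQUENCES_ISSUE_410_PRUNED:

# Sketch of the multi-sequence fit convention, with toy data.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
sequences = [rng.random((n, 4)).astype(np.float32) for n in (30, 50, 40)]

def fit_score(seqs):
    X = np.concatenate(seqs)              # pack all sequences into one array
    lengths = [len(s) for s in seqs]      # and pass their lengths separately
    model = hmm.GMMHMM(n_components=2, n_mix=2, n_iter=100,
                       covariance_type="diag", init_params="mcw",
                       random_state=1234)
    init = 1.0 / model.n_components
    model.startprob_ = np.full(model.n_components, init)
    model.transmat_ = np.full((model.n_components, model.n_components), init)
    model.fit(X, lengths)
    return model.score(X, lengths)

# Permuting whole sequences should leave the fitted model, and hence the
# score, essentially unchanged; the samples *within* each sequence keep
# their order.
print(fit_score(sequences), fit_score(list(reversed(sequences))))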
_ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-spherical] _
covariance_type = 'spherical', implementation = 'scaling', init_params = 'mcw'
verbose = False
@pytest.mark.parametrize("covariance_type",
["diag", "spherical", "tied", "full"])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering(
covariance_type, implementation, init_params='mcw', verbose=False
):
"""
Sanity check GMM-HMM fit behaviour when run on multiple sequences
aka multiple frames.
Training data consumed during GMM-HMM fit is packed into a single
array X containing one or more sequences. In the case where
there are two or more input sequences, the ordering that the
sequences are packed into X should not influence the results
of the fit. Major differences in convergence during EM
iterations caused merely by permuting sequence order in the input
indicate a likely defect in the fit implementation.
Note: the ordering of samples inside a given sequence
is very meaningful; permuting the order of samples would
destroy the state transition structure in the input data.
See issue 410 on github:
https://github.com/hmmlearn/hmmlearn/issues/410
"""
sequence_data = EXAMPLE_SEQUENCES_ISSUE_410_PRUNED
scores = []
for p in make_permutations(sequence_data):
sequences = sequence_data[p]
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]
model = hmm.GMMHMM(
n_components=2,
n_mix=2,
n_iter=100,
covariance_type=covariance_type,
verbose=verbose,
init_params=init_params,
random_state=1234,
implementation=implementation
)
# don't use random parameters for testing
init = 1. / model.n_components
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
model.monitor_ = StrictMonitor(
model.monitor_.tol,
model.monitor_.n_iter,
model.monitor_.verbose,
)
> model.fit(X, lengths)
hmmlearn/tests/test_gmm_hmm_multisequence.py:280:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.]]),
covars_weight=a...n_components=2,
n_iter=100, n_mix=2, random_state=1234,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[0.00992058, 0.44151747, 0.5395124 , 0.40644765],
[0.00962487, 0.45613006, 0.52375835, 0.3899082 ],
...00656536, 0.39309287, 0.60035396, 0.41596898],
[0.00693208, 0.37821782, 0.59813255, 0.4394344 ]], dtype=float32)
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-tied] __
covariance_type = 'tied', implementation = 'scaling', init_params = 'mcw'
verbose = False
@pytest.mark.parametrize("covariance_type",
["diag", "spherical", "tied", "full"])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering(
covariance_type, implementation, init_params='mcw', verbose=False
):
"""
Sanity check GMM-HMM fit behaviour when run on multiple sequences
aka multiple frames.
Training data consumed during GMM-HMM fit is packed into a single
array X containing one or more sequences. In the case where
there are two or more input sequences, the ordering that the
sequences are packed into X should not influence the results
of the fit. Major differences in convergence during EM
iterations caused merely by permuting sequence order in the input
indicate a likely defect in the fit implementation.
Note: the ordering of samples inside a given sequence
is very meaningful; permuting the order of samples would
destroy the state transition structure in the input data.
See issue 410 on github:
https://github.com/hmmlearn/hmmlearn/issues/410
"""
sequence_data = EXAMPLE_SEQUENCES_ISSUE_410_PRUNED
scores = []
for p in make_permutations(sequence_data):
sequences = sequence_data[p]
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]
model = hmm.GMMHMM(
n_components=2,
n_mix=2,
n_iter=100,
covariance_type=covariance_type,
verbose=verbose,
init_params=init_params,
random_state=1234,
implementation=implementation
)
# don't use random parameters for testing
init = 1. / model.n_components
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
model.monitor_ = StrictMonitor(
model.monitor_.tol,
model.monitor_.n_iter,
model.monitor_.verbose,
)
> model.fit(X, lengths)
hmmlearn/tests/test_gmm_hmm_multisequence.py:280:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0....n_components=2,
n_iter=100, n_mix=2, random_state=1234,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[0.00992058, 0.44151747, 0.5395124 , 0.40644765],
[0.00962487, 0.45613006, 0.52375835, 0.3899082 ],
...00656536, 0.39309287, 0.60035396, 0.41596898],
[0.00693208, 0.37821782, 0.59813255, 0.4394344 ]], dtype=float32)
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-full] __
covariance_type = 'full', implementation = 'scaling', init_params = 'mcw'
verbose = False
@pytest.mark.parametrize("covariance_type",
["diag", "spherical", "tied", "full"])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering(
covariance_type, implementation, init_params='mcw', verbose=False
):
"""
Sanity check GMM-HMM fit behaviour when run on multiple sequences
aka multiple frames.
Training data consumed during GMM-HMM fit is packed into a single
array X containing one or more sequences. In the case where
there are two or more input sequences, the ordering that the
sequences are packed into X should not influence the results
of the fit. Major differences in convergence during EM
iterations caused merely by permuting sequence order in the input
indicate a likely defect in the fit implementation.
Note: the ordering of samples inside a given sequence
is very meaningful; permuting the order of samples would
destroy the state transition structure in the input data.
See issue 410 on github:
https://github.com/hmmlearn/hmmlearn/issues/410
"""
sequence_data = EXAMPLE_SEQUENCES_ISSUE_410_PRUNED
scores = []
for p in make_permutations(sequence_data):
sequences = sequence_data[p]
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]
model = hmm.GMMHMM(
n_components=2,
n_mix=2,
n_iter=100,
covariance_type=covariance_type,
verbose=verbose,
init_params=init_params,
random_state=1234,
implementation=implementation
)
# don't use random parameters for testing
init = 1. / model.n_components
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
model.monitor_ = StrictMonitor(
model.monitor_.tol,
model.monitor_.n_iter,
model.monitor_.verbose,
)
> model.fit(X, lengths)
hmmlearn/tests/test_gmm_hmm_multisequence.py:280:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0.,...n_components=2,
n_iter=100, n_mix=2, random_state=1234,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[0.00992058, 0.44151747, 0.5395124 , 0.40644765],
[0.00962487, 0.45613006, 0.52375835, 0.3899082 ],
...00656536, 0.39309287, 0.60035396, 0.41596898],
[0.00693208, 0.37821782, 0.59813255, 0.4394344 ]], dtype=float32)
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-diag] ____
covariance_type = 'diag', implementation = 'log', init_params = 'mcw'
verbose = False
@pytest.mark.parametrize("covariance_type",
["diag", "spherical", "tied", "full"])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering(
covariance_type, implementation, init_params='mcw', verbose=False
):
"""
Sanity check GMM-HMM fit behaviour when run on multiple sequences
aka multiple frames.
Training data consumed during GMM-HMM fit is packed into a single
array X containing one or more sequences. In the case where
there are two or more input sequences, the ordering that the
sequences are packed into X should not influence the results
of the fit. Major differences in convergence during EM
iterations caused merely by permuting sequence order in the input
indicate a likely defect in the fit implementation.
Note: the ordering of samples inside a given sequence
is very meaningful; permuting the order of samples would
destroy the state transition structure in the input data.
See issue 410 on github:
https://github.com/hmmlearn/hmmlearn/issues/410
"""
sequence_data = EXAMPLE_SEQUENCES_ISSUE_410_PRUNED
scores = []
for p in make_permutations(sequence_data):
sequences = sequence_data[p]
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]
model = hmm.GMMHMM(
n_components=2,
n_mix=2,
n_iter=100,
covariance_type=covariance_type,
verbose=verbose,
init_params=init_params,
random_state=1234,
implementation=implementation
)
# don't use random parameters for testing
init = 1. / model.n_components
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
model.monitor_ = StrictMonitor(
model.monitor_.tol,
model.monitor_.n_iter,
model.monitor_.verbose,
)
> model.fit(X, lengths)
hmmlearn/tests/test_gmm_hmm_multisequence.py:280:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5, -1.5, -1.5],
[-1.5, -1.5, -1.5, -1.5]],
[[-1.5, -1.5, -1.5, -...n_components=2,
n_iter=100, n_mix=2, random_state=1234,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[0.00992058, 0.44151747, 0.5395124 , 0.40644765],
[0.00962487, 0.45613006, 0.52375835, 0.3899082 ],
...00656536, 0.39309287, 0.60035396, 0.41596898],
[0.00693208, 0.37821782, 0.59813255, 0.4394344 ]], dtype=float32)
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-spherical] _
covariance_type = 'spherical', implementation = 'log', init_params = 'mcw'
verbose = False
@pytest.mark.parametrize("covariance_type",
["diag", "spherical", "tied", "full"])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering(
covariance_type, implementation, init_params='mcw', verbose=False
):
"""
Sanity check GMM-HMM fit behaviour when run on multiple sequences
aka multiple frames.
Training data consumed during GMM-HMM fit is packed into a single
array X containing one or more sequences. In the case where
there are two or more input sequences, the ordering that the
sequences are packed into X should not influence the results
of the fit. Major differences in convergence during EM
iterations caused merely by permuting sequence order in the input
indicate a likely defect in the fit implementation.
Note: the ordering of samples inside a given sequence
is very meaningful; permuting the order of samples would
destroy the state transition structure in the input data.
See issue 410 on github:
https://github.com/hmmlearn/hmmlearn/issues/410
"""
sequence_data = EXAMPLE_SEQUENCES_ISSUE_410_PRUNED
scores = []
for p in make_permutations(sequence_data):
sequences = sequence_data[p]
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]
model = hmm.GMMHMM(
n_components=2,
n_mix=2,
n_iter=100,
covariance_type=covariance_type,
verbose=verbose,
init_params=init_params,
random_state=1234,
implementation=implementation
)
# don't use random parameters for testing
init = 1. / model.n_components
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
model.monitor_ = StrictMonitor(
model.monitor_.tol,
model.monitor_.n_iter,
model.monitor_.verbose,
)
> model.fit(X, lengths)
hmmlearn/tests/test_gmm_hmm_multisequence.py:280:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.]]),
covars_weight=a...n_components=2,
n_iter=100, n_mix=2, random_state=1234,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[0.00992058, 0.44151747, 0.5395124 , 0.40644765],
[0.00962487, 0.45613006, 0.52375835, 0.3899082 ],
...00656536, 0.39309287, 0.60035396, 0.41596898],
[0.00693208, 0.37821782, 0.59813255, 0.4394344 ]], dtype=float32)
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-tied] ____
covariance_type = 'tied', implementation = 'log', init_params = 'mcw'
verbose = False
@pytest.mark.parametrize("covariance_type",
["diag", "spherical", "tied", "full"])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering(
covariance_type, implementation, init_params='mcw', verbose=False
):
"""
Sanity check GMM-HMM fit behaviour when run on multiple sequences
aka multiple frames.
Training data consumed during GMM-HMM fit is packed into a single
array X containing one or more sequences. In the case where
there are two or more input sequences, the ordering that the
sequences are packed into X should not influence the results
of the fit. Major differences in convergence during EM
iterations caused merely by permuting sequence order in the input
indicate a likely defect in the fit implementation.
Note: the ordering of samples inside a given sequence
is very meaningful; permuting the order of samples would
destroy the state transition structure in the input data.
See issue 410 on github:
https://github.com/hmmlearn/hmmlearn/issues/410
"""
sequence_data = EXAMPLE_SEQUENCES_ISSUE_410_PRUNED
scores = []
for p in make_permutations(sequence_data):
sequences = sequence_data[p]
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]
model = hmm.GMMHMM(
n_components=2,
n_mix=2,
n_iter=100,
covariance_type=covariance_type,
verbose=verbose,
init_params=init_params,
random_state=1234,
implementation=implementation
)
# don't use random parameters for testing
init = 1. / model.n_components
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
model.monitor_ = StrictMonitor(
model.monitor_.tol,
model.monitor_.n_iter,
model.monitor_.verbose,
)
> model.fit(X, lengths)
hmmlearn/tests/test_gmm_hmm_multisequence.py:280:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0....n_components=2,
n_iter=100, n_mix=2, random_state=1234,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[0.00992058, 0.44151747, 0.5395124 , 0.40644765],
[0.00962487, 0.45613006, 0.52375835, 0.3899082 ],
...00656536, 0.39309287, 0.60035396, 0.41596898],
[0.00693208, 0.37821782, 0.59813255, 0.4394344 ]], dtype=float32)
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-full] ____
covariance_type = 'full', implementation = 'log', init_params = 'mcw'
verbose = False
@pytest.mark.parametrize("covariance_type",
["diag", "spherical", "tied", "full"])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering(
covariance_type, implementation, init_params='mcw', verbose=False
):
"""
Sanity check GMM-HMM fit behaviour when run on multiple sequences
aka multiple frames.
Training data consumed during GMM-HMM fit is packed into a single
array X containing one or more sequences. In the case where
there are two or more input sequences, the ordering that the
sequences are packed into X should not influence the results
of the fit. Major differences in convergence during EM
iterations caused merely by permuting sequence order in the input
indicate a likely defect in the fit implementation.
Note: the ordering of samples inside a given sequence
is very meaningful; permuting the order of samples would
destroy the state transition structure in the input data.
See issue 410 on github:
https://github.com/hmmlearn/hmmlearn/issues/410
"""
sequence_data = EXAMPLE_SEQUENCES_ISSUE_410_PRUNED
scores = []
for p in make_permutations(sequence_data):
sequences = sequence_data[p]
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]
model = hmm.GMMHMM(
n_components=2,
n_mix=2,
n_iter=100,
covariance_type=covariance_type,
verbose=verbose,
init_params=init_params,
random_state=1234,
implementation=implementation
)
# don't use random parameters for testing
init = 1. / model.n_components
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
model.monitor_ = StrictMonitor(
model.monitor_.tol,
model.monitor_.n_iter,
model.monitor_.verbose,
)
> model.fit(X, lengths)
hmmlearn/tests/test_gmm_hmm_multisequence.py:280:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0.,...n_components=2,
n_iter=100, n_mix=2, random_state=1234,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[0.00992058, 0.44151747, 0.5395124 , 0.40644765],
[0.00962487, 0.45613006, 0.52375835, 0.3899082 ],
...00656536, 0.39309287, 0.60035396, 0.41596898],
[0.00693208, 0.37821782, 0.59813255, 0.4394344 ]], dtype=float32)
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_____ TestGMMHMMWithSphericalCovars.test_score_samples_and_decode[scaling] _____
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffff94aa4550>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
...state=RandomState(MT19937) at 0xFFFF94790A40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 3.10434458, 4.41854888],
[ 5.6930133 , 2.79308255],
[34.40086102, 37.4658949 ],
...,
[ 3.70365171, 3.71508656],
[ 1.74345864, 3.15260967],
[ 6.91178766, 9.37996936]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
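_score_scaling shown above is the internal path behind the public scoring API that this test drives through score_samples. A short usage sketch of that public API on a small, fully specified GaussianHMM (parameter values are illustrative only):

# Sketch: sampling from a fully specified model, then scoring and decoding it.
import numpy as np
from hmmlearn import hmm

model = hmm.GaussianHMM(n_components=2, covariance_type="diag",
                        implementation="scaling")
model.startprob_ = np.array([0.6, 0.4])
model.transmat_ = np.array([[0.7, 0.3],
                            [0.2, 0.8]])
model.means_ = np.array([[0.0, 0.0],
                         [5.0, 5.0]])
model.covars_ = np.ones((2, 2))        # per-state diagonal variances

X, states = model.sample(1000)
log_likelihood, posteriors = model.score_samples(X)   # as in the test above
log_prob, decoded = model.decode(X)                   # Viterbi state path
print(log_likelihood, posteriors.shape, decoded[:10])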
_______ TestGMMHMMWithSphericalCovars.test_score_samples_and_decode[log] _______
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffff94e9a3f0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
...state=RandomState(MT19937) at 0xFFFF94790B40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 3.10434458, 4.41854888],
[ 5.6930133 , 2.79308255],
[34.40086102, 37.4658949 ],
...,
[ 3.70365171, 3.71508656],
[ 1.74345864, 3.15260967],
[ 6.91178766, 9.37996936]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______________ TestGMMHMMWithSphericalCovars.test_fit[scaling] ________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffff94e9a120>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
...state=RandomState(MT19937) at 0xFFFF94887940,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 3.10434458, 4.41854888],
[ 5.6930133 , 2.79308255],
[34.40086102, 37.4658949 ],
...,
[ 3.70365171, 3.71508656],
[ 1.74345864, 3.15260967],
[ 6.91178766, 9.37996936]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
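assert_log_likelihood_increasing (hmmlearn/tests/__init__.py:46) encodes the basic EM property these fit tests rely on: continuing to fit the same data must not decrease its log-likelihood. A rough sketch of an equivalent check through the public API only (this is not the helper's actual implementation; names and values are illustrative):

# Sketch: repeated single-iteration fits should give non-decreasing scores.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))

model = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                        n_iter=1, init_params="")   # keep params between fits
model.startprob_ = np.full(3, 1.0 / 3)
model.transmat_ = np.full((3, 3), 1.0 / 3)
model.means_ = rng.normal(size=(3, 2))
model.covars_ = np.ones((3, 2))

scores = []
for _ in range(5):          # five chained EM iterations
    model.fit(X)
    scores.append(model.score(X))
assert np.all(np.diff(scores) >= -1e-8), scores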
_________________ TestGMMHMMWithSphericalCovars.test_fit[log] __________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffff94a91630>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
...state=RandomState(MT19937) at 0xFFFF94884240,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 3.10434458, 4.41854888],
[ 5.6930133 , 2.79308255],
[34.40086102, 37.4658949 ],
...,
[ 3.70365171, 3.71508656],
[ 1.74345864, 3.15260967],
[ 6.91178766, 9.37996936]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestGMMHMMWithSphericalCovars.test_fit_sparse_data[scaling] __________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffff94a91b70>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
...state=RandomState(MT19937) at 0xFFFF9488C940,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 3999.84865619, 3069.23303621],
[ 4002.43732491, 3067.60756989],
[34695.58244203, 37696.508278... [ 4000.44796333, 3068.5295739 ],
[ 3998.48777025, 3067.967097 ],
[ 6450.19377286, 7478.35563 ]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
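The warnings in the captured log above explain behaviour that is easy to trip over: fit() re-initializes every parameter whose code letter ('s', 't', 'm', 'c', 'w') appears in init_params, silently replacing attributes that were assigned beforehand. A small sketch of how hand-set values are kept (toy model, illustrative values):

# Sketch: keeping hand-set start/transition probabilities across fit().
import numpy as np
from hmmlearn import hmm

# Dropping 's' and 't' from init_params tells fit() to leave startprob_ and
# transmat_ alone while still initializing means, covariances and weights.
model = hmm.GMMHMM(n_components=2, n_mix=2, covariance_type="diag",
                   init_params="mcw", random_state=0)
model.startprob_ = np.array([0.5, 0.5])
model.transmat_ = np.full((2, 2), 0.5)

# Alternatively, init_params="" and set *all* parameters yourself before
# fitting, as the multi-sequence and priors tests above do.
X = np.random.default_rng(0).random((200, 3))   # toy data
model.fit(X)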
___________ TestGMMHMMWithSphericalCovars.test_fit_sparse_data[log] ____________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffff94ada4e0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
...state=RandomState(MT19937) at 0xFFFF94790F40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 3999.84865619, 3069.23303621],
[ 4002.43732491, 3067.60756989],
[34695.58244203, 37696.508278... [ 4000.44796333, 3068.5295739 ],
[ 3998.48777025, 3067.967097 ],
[ 6450.19377286, 7478.35563 ]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
____________ TestGMMHMMWithSphericalCovars.test_criterion[scaling] _____________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffff94a84260>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.]]),
covars_weight=a...2,
random_state=RandomState(MT19937) at 0xFFFF9488F540,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 40.65624455, 29.16802829],
[242.52436414, 193.40204501],
[347.23482679, 377.48914412],
...,
[ 38.12816493, 29.67601719],
[241.05090945, 192.22538034],
[240.19846475, 193.26742897]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
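test_criterion performs model selection over the number of hidden states using information criteria. A sketch of that selection loop, assuming the aic()/bic() helpers exposed by recent hmmlearn releases (toy data, illustrative settings):

# Sketch: choosing n_components by comparing AIC/BIC on the training data.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(2013)
X = rng.normal(size=(2000, 2))          # toy data, illustrative only

ns = [2, 3, 4, 5]
aic, bic = [], []
for n in ns:
    h = hmm.GMMHMM(n, n_mix=2, covariance_type="spherical",
                   n_iter=10, random_state=42)
    h.fit(X)
    aic.append(h.aic(X))
    bic.append(h.bic(X))

# Lower is better for both; BIC penalizes additional states more heavily.
print("best n by AIC:", ns[int(np.argmin(aic))])
print("best n by BIC:", ns[int(np.argmin(bic))])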
______________ TestGMMHMMWithSphericalCovars.test_criterion[log] _______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffff94a843c0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.]]),
covars_weight=a...2,
random_state=RandomState(MT19937) at 0xFFFF94884D40,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 40.65624455, 29.16802829],
[242.52436414, 193.40204501],
[347.23482679, 377.48914412],
...,
[ 38.12816493, 29.67601719],
[241.05090945, 192.22538034],
[240.19846475, 193.26742897]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______ TestGMMHMMWithDiagCovars.test_score_samples_and_decode[scaling] ________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffff94aa4750>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
...state=RandomState(MT19937) at 0xFFFF94887940,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 9.54552606, 7.2016523 ],
[28.42609795, 28.94775636],
[ 3.62062358, 2.11526678],
...,
[ 4.11095304, -1.71284803],
[ 6.91178766, 8.51698046],
[ 7.22860929, 6.57244198]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestGMMHMMWithDiagCovars.test_score_samples_and_decode[log] __________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffff94e9a210>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
...state=RandomState(MT19937) at 0xFFFF94790840,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 9.54552606, 7.2016523 ],
[28.42609795, 28.94775636],
[ 3.62062358, 2.11526678],
...,
[ 4.11095304, -1.71284803],
[ 6.91178766, 8.51698046],
[ 7.22860929, 6.57244198]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________________ TestGMMHMMWithDiagCovars.test_fit[scaling] __________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffff94e9a7b0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
...state=RandomState(MT19937) at 0xFFFF94790D40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 9.54552606, 7.2016523 ],
[28.42609795, 28.94775636],
[ 3.62062358, 2.11526678],
...,
[ 4.11095304, -1.71284803],
[ 6.91178766, 8.51698046],
[ 7.22860929, 6.57244198]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestGMMHMMWithDiagCovars.test_fit[log] ____________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffff94a92f90>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
...state=RandomState(MT19937) at 0xFFFF94790A40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 9.54552606, 7.2016523 ],
[28.42609795, 28.94775636],
[ 3.62062358, 2.11526678],
...,
[ 4.11095304, -1.71284803],
[ 6.91178766, 8.51698046],
[ 7.22860929, 6.57244198]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________ TestGMMHMMWithDiagCovars.test_fit_sparse_data[scaling] ____________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffff94a934d0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
...state=RandomState(MT19937) at 0xFFFF94887140,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 6452.82751127, 7476.17731294],
[29346.43680528, 29439.90571357],
[ 4000.36493519, 3066.929754... [ 4000.85526465, 3063.10163931],
[ 6450.19377286, 7477.49264111],
[ 6450.51059449, 7475.54810263]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
______________ TestGMMHMMWithDiagCovars.test_fit_sparse_data[log] ______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffff94ada750>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
...state=RandomState(MT19937) at 0xFFFF94884240,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 6452.82751127, 7476.17731294],
[29346.43680528, 29439.90571357],
[ 4000.36493519, 3066.929754... [ 4000.85526465, 3063.10163931],
[ 6450.19377286, 7477.49264111],
[ 6450.51059449, 7475.54810263]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
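Note: since the error is raised directly by the _hmmc bindings, it can likely be reproduced outside the test suite with a single call to forward_log, using the argument order and return values shown in the tracebacks above. A minimal sketch (hmmlearn._hmmc is a private module and the array values are arbitrary):

# Minimal reproducer sketch for the pybind11 GIL assertion, based on the
# forward_log(startprob, transmat, log_frameprob) call shown in the traceback.
import numpy as np
from hmmlearn import _hmmc

startprob = np.array([0.6, 0.4])
transmat = np.array([[0.7, 0.3],
                     [0.4, 0.6]])
# Log-probability of each of 3 observations under each of the 2 states.
log_frameprob = np.log(np.array([[0.5, 0.5],
                                 [0.2, 0.8],
                                 [0.9, 0.1]]))

# On an affected build this raises
# "RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure".
log_prob, fwdlattice = _hmmc.forward_log(startprob, transmat, log_frameprob)
print(log_prob, fwdlattice.shape)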
_______________ TestGMMHMMWithDiagCovars.test_criterion[scaling] _______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffff94a84d60>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]]]),
...2,
random_state=RandomState(MT19937) at 0xFFFF94790140,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 38.0423023 , 32.05291123],
[241.79570647, 194.37999672],
[294.33562038, 295.51745038],
...,
[ 61.05939775, 73.76171462],
[241.14804197, 192.17496613],
[241.72161057, 190.89927936]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestGMMHMMWithDiagCovars.test_criterion[log] _________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffff94a84e10>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]]]),
...2,
random_state=RandomState(MT19937) at 0xFFFF94791940,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 38.0423023 , 32.05291123],
[241.79570647, 194.37999672],
[294.33562038, 295.51745038],
...,
[ 61.05939775, 73.76171462],
[241.14804197, 192.17496613],
[241.72161057, 190.89927936]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______ TestGMMHMMWithTiedCovars.test_score_samples_and_decode[scaling] ________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffff94aa4950>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....state=RandomState(MT19937) at 0xFFFF9488E240,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 4.82987073, 10.88003985],
[30.24211141, 29.22255983],
[ 4.42110145, 2.15841708],
...,
[ 6.15358777, -1.47582217],
[ 6.23813069, 7.99872158],
[ 6.0189001 , 8.3217492 ]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestGMMHMMWithTiedCovars.test_score_samples_and_decode[log] __________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffff94e9a8a0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....state=RandomState(MT19937) at 0xFFFF9488F040,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 4.82987073, 10.88003985],
[30.24211141, 29.22255983],
[ 4.42110145, 2.15841708],
...,
[ 6.15358777, -1.47582217],
[ 6.23813069, 7.99872158],
[ 6.0189001 , 8.3217492 ]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________________ TestGMMHMMWithTiedCovars.test_fit[scaling] __________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffff94e9a990>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....state=RandomState(MT19937) at 0xFFFF94AA6B40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 4.82987073, 10.88003985],
[30.24211141, 29.22255983],
[ 4.42110145, 2.15841708],
...,
[ 6.15358777, -1.47582217],
[ 6.23813069, 7.99872158],
[ 6.0189001 , 8.3217492 ]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestGMMHMMWithTiedCovars.test_fit[log] ____________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffff94a6c9f0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....state=RandomState(MT19937) at 0xFFFF94AA6140,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 4.82987073, 10.88003985],
[30.24211141, 29.22255983],
[ 4.42110145, 2.15841708],
...,
[ 6.15358777, -1.47582217],
[ 6.23813069, 7.99872158],
[ 6.0189001 , 8.3217492 ]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________ TestGMMHMMWithTiedCovars.test_fit_sparse_data[scaling] ____________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffff94a6cf30>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....state=RandomState(MT19937) at 0xFFFF94887640,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 6448.11185593, 7479.8557005 ],
[29348.25281874, 29440.18051704],
[ 4001.16541306, 3066.972904... [ 4002.89789938, 3063.33866516],
[ 6449.52011589, 7476.97438223],
[ 6449.3008853 , 7477.29740985]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
______________ TestGMMHMMWithTiedCovars.test_fit_sparse_data[log] ______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffff94ad81f0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....state=RandomState(MT19937) at 0xFFFF94886440,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 6448.11185593, 7479.8557005 ],
[29348.25281874, 29440.18051704],
[ 4001.16541306, 3066.972904... [ 4002.89789938, 3063.33866516],
[ 6449.52011589, 7476.97438223],
[ 6449.3008853 , 7477.29740985]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
_______________ TestGMMHMMWithTiedCovars.test_criterion[scaling] _______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffff94a857b0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....2,
random_state=RandomState(MT19937) at 0xFFFF9488C140,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 39.3472143 , 31.96516627],
[239.4457031 , 193.9191015 ],
[292.652245 , 294.71067724],
...,
[ 66.25970996, 70.96753017],
[242.16534204, 192.32929203],
[243.78173717, 191.48640575]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestGMMHMMWithTiedCovars.test_criterion[log] _________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffff94a85860>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....2,
random_state=RandomState(MT19937) at 0xFFFF94885440,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 39.3472143 , 31.96516627],
[239.4457031 , 193.9191015 ],
[292.652245 , 294.71067724],
...,
[ 66.25970996, 70.96753017],
[242.16534204, 192.32929203],
[243.78173717, 191.48640575]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
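Note: the test_criterion failures abort inside the very first h.fit(X) call, before any information criteria are compared. For reference, the selection pattern these tests exercise looks roughly like the sketch below, assuming the aic()/bic() helpers provided by this hmmlearn release; the data and hyperparameters are illustrative only, not taken from the test suite.

# Sketch of AIC/BIC model selection over the number of hidden states,
# assuming GMMHMM.aic()/GMMHMM.bic() are available in this hmmlearn release.
import numpy as np
from hmmlearn.hmm import GMMHMM

rng = np.random.RandomState(0)
# Three well-separated 2-D clusters as toy observations.
X = np.concatenate([rng.normal(loc, 1.0, size=(200, 2)) for loc in (0, 10, 20)])

scores = {}
for n in (2, 3, 4, 5):
    h = GMMHMM(n_components=n, n_mix=2, covariance_type="diag",
               random_state=rng, n_iter=20)
    h.fit(X)
    scores[n] = (h.aic(X), h.bic(X))

# Lower AIC/BIC indicates a better trade-off between fit and model size.
best_n = min(scores, key=lambda n: scores[n][1])
print(scores, "best by BIC:", best_n)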
_______ TestGMMHMMWithFullCovars.test_score_samples_and_decode[scaling] ________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffff94aa4b50>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...state=RandomState(MT19937) at 0xFFFF94886F40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 5.79374863, 8.96492185],
[26.41560176, 18.9509068 ],
[15.10318903, 13.16898577],
...,
[ 8.84018693, 2.41627666],
[32.50086843, 27.24027875],
[ 4.04144414, 2.99516636]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestGMMHMMWithFullCovars.test_score_samples_and_decode[log] __________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffff94e9aa80>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...state=RandomState(MT19937) at 0xFFFF94887D40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 5.79374863, 8.96492185],
[26.41560176, 18.9509068 ],
[15.10318903, 13.16898577],
...,
[ 8.84018693, 2.41627666],
[32.50086843, 27.24027875],
[ 4.04144414, 2.99516636]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________________ TestGMMHMMWithFullCovars.test_fit[scaling] __________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffff94e9ab70>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...state=RandomState(MT19937) at 0xFFFF94887140,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 5.79374863, 8.96492185],
[26.41560176, 18.9509068 ],
[15.10318903, 13.16898577],
...,
[ 8.84018693, 2.41627666],
[32.50086843, 27.24027875],
[ 4.04144414, 2.99516636]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestGMMHMMWithFullCovars.test_fit[log] ____________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffff94a92270>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...state=RandomState(MT19937) at 0xFFFF94884340,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 5.79374863, 8.96492185],
[26.41560176, 18.9509068 ],
[15.10318903, 13.16898577],
...,
[ 8.84018693, 2.41627666],
[32.50086843, 27.24027875],
[ 4.04144414, 2.99516636]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________ TestGMMHMMWithFullCovars.test_fit_sparse_data[scaling] ____________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffff94a910f0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...state=RandomState(MT19937) at 0xFFFF94886440,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 6449.07573383, 7477.94058249],
[24146.71996085, 19276.07031622],
[17479.42387006, 13924.043632... [ 6452.12217213, 7471.39193731],
[29350.51157576, 29438.19823596],
[ 4000.78575575, 3067.80965369]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
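The warnings above all have the same cause: fit() re-initializes every parameter whose code letter ('s', 't', 'w', 'm', 'c') appears in init_params, even if the corresponding attribute was already set. A minimal sketch of how manually set parameters are kept, assuming the standard hmmlearn GMMHMM constructor; the numeric values are illustrative only and not taken from the test above.

    import numpy as np
    from hmmlearn.hmm import GMMHMM

    # Passing init_params="" tells fit() not to overwrite anything, so every
    # parameter (startprob_, transmat_, weights_, means_, covars_) must then
    # be assigned by hand before fitting.  Values here are illustrative only.
    model = GMMHMM(n_components=3, n_mix=2, covariance_type="full",
                   init_params="")
    model.startprob_ = np.array([0.3, 0.3, 0.4])
    model.transmat_ = np.full((3, 3), 1.0 / 3)
    # ... weights_, means_ and covars_ would also need to be set here
    # before calling model.fit(X).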
______________ TestGMMHMMWithFullCovars.test_fit_sparse_data[log] ______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffff94adac30>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...state=RandomState(MT19937) at 0xFFFF9488F040,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 6449.07573383, 7477.94058249],
[24146.71996085, 19276.07031622],
[17479.42387006, 13924.043632... [ 6452.12217213, 7471.39193731],
[29350.51157576, 29438.19823596],
[ 4000.78575575, 3067.80965369]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
_______________ TestGMMHMMWithFullCovars.test_criterion[scaling] _______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffff94a84aa0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...2,
random_state=RandomState(MT19937) at 0xFFFF94791E40,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 38.38394272, 31.47161592],
[173.50537726, 139.44913251],
[345.97120338, 377.84504473],
...,
[ 66.25970996, 70.96753017],
[175.32456372, 138.9877489 ],
[243.56689381, 192.53130439]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
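Every failure in this group aborts inside the compiled helper before any estimation happens. What the call site above expects from _hmmc.forward_scaling can be read off the unpacking (log_prob, fwdlattice, scaling_factors); the following is a textbook NumPy sketch of the scaled forward recursion under that assumption, not hmmlearn's C++ implementation.

    import numpy as np

    def forward_scaling(startprob, transmat, frameprob):
        # Scaled forward algorithm; frameprob[t, i] = P(x_t | state i).
        n_samples, n_components = frameprob.shape
        fwdlattice = np.zeros((n_samples, n_components))
        scaling = np.zeros(n_samples)

        fwdlattice[0] = startprob * frameprob[0]
        scaling[0] = 1.0 / fwdlattice[0].sum()
        fwdlattice[0] *= scaling[0]

        for t in range(1, n_samples):
            fwdlattice[t] = (fwdlattice[t - 1] @ transmat) * frameprob[t]
            scaling[t] = 1.0 / fwdlattice[t].sum()
            fwdlattice[t] *= scaling[t]

        # With each row rescaled to sum to 1, log P(X) = -sum(log(scaling)).
        log_prob = -np.log(scaling).sum()
        return log_prob, fwdlattice, scaling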
_________________ TestGMMHMMWithFullCovars.test_criterion[log] _________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffff94a84890>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...2,
random_state=RandomState(MT19937) at 0xFFFF94AA7040,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 38.38394272, 31.47161592],
[173.50537726, 139.44913251],
[345.97120338, 377.84504473],
...,
[ 66.25970996, 70.96753017],
[175.32456372, 138.9877489 ],
[243.56689381, 192.53130439]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
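The 'log' implementation fails at the matching _hmmc.forward_log call. Again only as a hedged reference for what the call site unpacks (log_prob, fwdlattice), a log-space forward pass can be sketched with scipy's logsumexp; this is not the compiled routine hmmlearn actually uses.

    import numpy as np
    from scipy.special import logsumexp

    def forward_log(startprob, transmat, log_frameprob):
        # Log-space forward pass matching the (log_prob, fwdlattice)
        # unpacking in _fit_log above (sketch only).
        n_samples, n_components = log_frameprob.shape
        log_startprob = np.log(startprob)
        log_transmat = np.log(transmat)

        fwdlattice = np.empty((n_samples, n_components))
        fwdlattice[0] = log_startprob + log_frameprob[0]
        for t in range(1, n_samples):
            fwdlattice[t] = (
                logsumexp(fwdlattice[t - 1][:, None] + log_transmat, axis=0)
                + log_frameprob[t])

        log_prob = logsumexp(fwdlattice[-1])
        return log_prob, fwdlattice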
__________________ TestGMMHMM_KmeansInit.test_kmeans[scaling] __________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMM_KmeansInit object at 0xffff94af5bd0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_kmeans(self, implementation):
# Generate two isolated cluster.
# The second cluster has no. of points less than n_mix.
np.random.seed(0)
data1 = np.random.uniform(low=0, high=1, size=(100, 2))
data2 = np.random.uniform(low=5, high=6, size=(5, 2))
data = np.r_[data1, data2]
model = GMMHMM(n_components=2, n_mix=10, n_iter=5,
implementation=implementation)
> model.fit(data) # _init() should not fail here
hmmlearn/tests/test_gmm_hmm_new.py:232:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5],
[-1.5, -1.5],
[-1.5, -1.5],
[-... weights_prior=array([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]]))
X = array([[5.48813504e-01, 7.15189366e-01],
[6.02763376e-01, 5.44883183e-01],
[4.23654799e-01, 6.45894113e-... [5.02467873e+00, 5.06724963e+00],
[5.67939277e+00, 5.45369684e+00],
[5.53657921e+00, 5.89667129e+00]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestGMMHMM_KmeansInit.test_kmeans[log] ____________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMM_KmeansInit object at 0xffff94af5d10>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_kmeans(self, implementation):
# Generate two isolated cluster.
# The second cluster has no. of points less than n_mix.
np.random.seed(0)
data1 = np.random.uniform(low=0, high=1, size=(100, 2))
data2 = np.random.uniform(low=5, high=6, size=(5, 2))
data = np.r_[data1, data2]
model = GMMHMM(n_components=2, n_mix=10, n_iter=5,
implementation=implementation)
> model.fit(data) # _init() should not fail here
hmmlearn/tests/test_gmm_hmm_new.py:232:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5],
[-1.5, -1.5],
[-1.5, -1.5],
[-... weights_prior=array([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]]))
X = array([[5.48813504e-01, 7.15189366e-01],
[6.02763376e-01, 5.44883183e-01],
[4.23654799e-01, 6.45894113e-... [5.02467873e+00, 5.06724963e+00],
[5.67939277e+00, 5.45369684e+00],
[5.53657921e+00, 5.89667129e+00]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestGMMHMM_MultiSequence.test_chunked[diag] __________________
sellf = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMM_MultiSequence object at 0xffff94af5e50>
covtype = 'diag', init_params = 'mcw'
@pytest.mark.parametrize("covtype",
["diag", "spherical", "tied", "full"])
def test_chunked(sellf, covtype, init_params='mcw'):
np.random.seed(0)
gmm = create_random_gmm(3, 2, covariance_type=covtype, prng=0)
gmm.covariances_ = gmm.covars_
data = gmm.sample(n_samples=1000)[0]
model1 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
model2 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
# don't use random parameters for testing
init = 1. / model1.n_components
for model in (model1, model2):
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
> model1.fit(data)
hmmlearn/tests/test_gmm_hmm_new.py:259:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
... n_components=3, n_mix=2, random_state=1,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[-19.97769034, -16.75056455],
[-19.88212945, -16.97913043],
[-19.93125386, -16.94276853],
...,
[-11.01150478, -1.11584774],
[-11.10973308, -1.07914205],
[-10.8998337 , -0.84707255]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______________ TestGMMHMM_MultiSequence.test_chunked[spherical] _______________
sellf = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMM_MultiSequence object at 0xffff94af5f90>
covtype = 'spherical', init_params = 'mcw'
@pytest.mark.parametrize("covtype",
["diag", "spherical", "tied", "full"])
def test_chunked(sellf, covtype, init_params='mcw'):
np.random.seed(0)
gmm = create_random_gmm(3, 2, covariance_type=covtype, prng=0)
gmm.covariances_ = gmm.covars_
data = gmm.sample(n_samples=1000)[0]
model1 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
model2 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
# don't use random parameters for testing
init = 1. / model1.n_components
for model in (model1, model2):
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
> model1.fit(data)
hmmlearn/tests/test_gmm_hmm_new.py:259:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
... n_components=3, n_mix=2, random_state=1,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[-19.80390185, -17.07835084],
[-19.60579587, -16.83260239],
[-19.92498908, -16.91030194],
...,
[-11.17392582, -1.26966434],
[-11.14220209, -1.03192961],
[-11.14814372, -0.99298261]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestGMMHMM_MultiSequence.test_chunked[tied] __________________
sellf = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMM_MultiSequence object at 0xffff94f568b0>
covtype = 'tied', init_params = 'mcw'
@pytest.mark.parametrize("covtype",
["diag", "spherical", "tied", "full"])
def test_chunked(sellf, covtype, init_params='mcw'):
np.random.seed(0)
gmm = create_random_gmm(3, 2, covariance_type=covtype, prng=0)
gmm.covariances_ = gmm.covars_
data = gmm.sample(n_samples=1000)[0]
model1 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
model2 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
# don't use random parameters for testing
init = 1. / model1.n_components
for model in (model1, model2):
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
> model1.fit(data)
hmmlearn/tests/test_gmm_hmm_new.py:259:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0.... n_components=3, n_mix=2, random_state=1,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[-20.22761614, -15.84567719],
[-21.23619726, -16.89659692],
[-20.71982474, -16.73140459],
...,
[-10.87180439, -1.55878592],
[ -9.74956046, -1.38825752],
[-12.13924424, -0.25692342]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestGMMHMM_MultiSequence.test_chunked[full] __________________
sellf = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMM_MultiSequence object at 0xffff94f569e0>
covtype = 'full', init_params = 'mcw'
@pytest.mark.parametrize("covtype",
["diag", "spherical", "tied", "full"])
def test_chunked(sellf, covtype, init_params='mcw'):
np.random.seed(0)
gmm = create_random_gmm(3, 2, covariance_type=covtype, prng=0)
gmm.covariances_ = gmm.covars_
data = gmm.sample(n_samples=1000)[0]
model1 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
model2 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
# don't use random parameters for testing
init = 1. / model1.n_components
for model in (model1, model2):
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
> model1.fit(data)
hmmlearn/tests/test_gmm_hmm_new.py:259:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
... n_components=3, n_mix=2, random_state=1,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[-20.51255292, -17.67431134],
[-15.84831228, -16.50504373],
[-21.40806672, -17.58054428],
...,
[-12.05683236, -0.58197627],
[-11.42658201, -1.42127957],
[-12.15481108, -0.76401566]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________ TestMultinomialHMM.test_score_samples[scaling] ________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff94f56fd0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples(self, implementation):
X = np.array([
[1, 1, 3, 0],
[3, 1, 1, 0],
[3, 0, 2, 0],
[2, 2, 0, 1],
[2, 2, 0, 1],
[0, 1, 1, 3],
[1, 0, 3, 1],
[2, 0, 1, 2],
[0, 2, 1, 2],
[1, 0, 1, 3],
])
n_samples = X.shape[0]
h = self.new_hmm(implementation)
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_multinomial_hmm.py:53:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(implementation='scaling', n_components=2, n_trials=5)
X = array([[1, 1, 3, 0],
[3, 1, 1, 0],
[3, 0, 2, 0],
[2, 2, 0, 1],
[2, 2, 0, 1],
[0, 1, 1, 3],
[1, 0, 3, 1],
[2, 0, 1, 2],
[0, 2, 1, 2],
[1, 0, 1, 3]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
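The warning repeated above concerns an API change: MultinomialHMM now models counts over n_trials draws per sample, while the previous behaviour survives as CategoricalHMM. The difference in input layout is visible in test_compare_with_categorical_hmm further down in this log (X1 versus X2); the sketch below restates it with illustrative values only.

    import numpy as np
    from hmmlearn import hmm

    # MultinomialHMM: each row is a vector of counts over the feature
    # categories, summing to n_trials (here n_trials=1, so rows are one-hot).
    X_multinomial = np.array([[1, 0, 0],
                              [0, 1, 0],
                              [0, 0, 1]])

    # CategoricalHMM: each row is a single integer category index.
    X_categorical = np.array([[0], [1], [2]])

    m = hmm.MultinomialHMM(n_components=2, n_trials=1)
    c = hmm.CategoricalHMM(n_components=2)
    # Each model is fit/scored on its own layout, e.g. m.fit(X_multinomial)
    # and c.fit(X_categorical).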
__________________ TestMultinomialHMM.test_score_samples[log] __________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff94f57100>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples(self, implementation):
X = np.array([
[1, 1, 3, 0],
[3, 1, 1, 0],
[3, 0, 2, 0],
[2, 2, 0, 1],
[2, 2, 0, 1],
[0, 1, 1, 3],
[1, 0, 3, 1],
[2, 0, 1, 2],
[0, 2, 1, 2],
[1, 0, 1, 3],
])
n_samples = X.shape[0]
h = self.new_hmm(implementation)
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_multinomial_hmm.py:53:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(n_components=2, n_trials=5)
X = array([[1, 1, 3, 0],
[3, 1, 1, 0],
[3, 0, 2, 0],
[2, 2, 0, 1],
[2, 2, 0, 1],
[0, 1, 1, 3],
[1, 0, 3, 1],
[2, 0, 1, 2],
[0, 2, 1, 2],
[1, 0, 1, 3]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
_____________________ TestMultinomialHMM.test_fit[scaling] _____________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff94f02ad0>
implementation = 'scaling', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='ste', n_iter=5):
h = self.new_hmm(implementation)
h.params = params
lengths = np.array([10] * 10)
X, _state_sequence = h.sample(lengths.sum())
# Mess up the parameters and see if we can re-learn them.
h.startprob_ = normalized(np.random.random(self.n_components))
h.transmat_ = normalized(
np.random.random((self.n_components, self.n_components)),
axis=1)
h.emissionprob_ = normalized(
np.random.random((self.n_components, self.n_features)),
axis=1)
# Also mess up trial counts.
h.n_trials = None
X[::2] *= 2
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_multinomial_hmm.py:92:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(implementation='scaling', init_params='', n_components=2,
n_iter=1,
n_tri..., 10, 5, 10, 5, 10, 5, 10, 5, 10, 5, 10, 5]),
random_state=RandomState(MT19937) at 0xFFFF9B17F840)
X = array([[4, 6, 0, 0],
[3, 1, 0, 1],
[2, 2, 6, 0],
[0, 2, 3, 0],
[0, 0, 4, 6],
[3, 0, 0, 2],
[2, 0, 4, 4],
[1, 0, 2, 2],
[2, 4, 0, 4],
[3, 2, 0, 0]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
_______________________ TestMultinomialHMM.test_fit[log] _______________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff94aa4c50>
implementation = 'log', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='ste', n_iter=5):
h = self.new_hmm(implementation)
h.params = params
lengths = np.array([10] * 10)
X, _state_sequence = h.sample(lengths.sum())
# Mess up the parameters and see if we can re-learn them.
h.startprob_ = normalized(np.random.random(self.n_components))
h.transmat_ = normalized(
np.random.random((self.n_components, self.n_components)),
axis=1)
h.emissionprob_ = normalized(
np.random.random((self.n_components, self.n_features)),
axis=1)
# Also mess up trial counts.
h.n_trials = None
X[::2] *= 2
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_multinomial_hmm.py:92:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(init_params='', n_components=2, n_iter=1,
n_trials=array([10, 5, 10, 5, 10, 5, 10, 5..., 10, 5, 10, 5, 10, 5, 10, 5, 10, 5, 10, 5]),
random_state=RandomState(MT19937) at 0xFFFF9B17F840)
X = array([[0, 0, 6, 4],
[4, 0, 1, 0],
[8, 2, 0, 0],
[2, 2, 0, 1],
[8, 2, 0, 0],
[1, 2, 1, 1],
[2, 4, 0, 4],
[2, 2, 0, 1],
[6, 2, 2, 0],
[0, 1, 2, 2]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
______________ TestMultinomialHMM.test_fit_emissionprob[scaling] _______________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff94aa4e50>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_emissionprob(self, implementation):
> self.test_fit(implementation, 'e')
hmmlearn/tests/test_multinomial_hmm.py:96:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_multinomial_hmm.py:92: in test_fit
assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(implementation='scaling', init_params='', n_components=2,
n_iter=1,
n_tri..., 5, 10, 5, 10, 5, 10, 5, 10, 5]),
params='e', random_state=RandomState(MT19937) at 0xFFFF9B17F840)
X = array([[0, 6, 4, 0],
[0, 1, 2, 2],
[0, 0, 6, 4],
[0, 2, 1, 2],
[6, 0, 2, 2],
[1, 3, 0, 1],
[8, 2, 0, 0],
[3, 2, 0, 0],
[6, 4, 0, 0],
[5, 0, 0, 0]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
________________ TestMultinomialHMM.test_fit_emissionprob[log] _________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff94e9af30>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_emissionprob(self, implementation):
> self.test_fit(implementation, 'e')
hmmlearn/tests/test_multinomial_hmm.py:96:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_multinomial_hmm.py:92: in test_fit
assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(init_params='', n_components=2, n_iter=1,
n_trials=array([10, 5, 10, 5, 10, 5, 10, 5..., 5, 10, 5, 10, 5, 10, 5, 10, 5]),
params='e', random_state=RandomState(MT19937) at 0xFFFF9B17F840)
X = array([[6, 4, 0, 0],
[4, 1, 0, 0],
[4, 2, 2, 2],
[1, 2, 1, 1],
[2, 0, 4, 4],
[0, 0, 5, 0],
[6, 2, 0, 2],
[3, 2, 0, 0],
[0, 0, 6, 4],
[0, 0, 1, 4]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
________________ TestMultinomialHMM.test_fit_with_init[scaling] ________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff94e9b020>
implementation = 'scaling', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_init(self, implementation, params='ste', n_iter=5):
lengths = [10] * 10
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(sum(lengths))
# use init_function to initialize paramerters
h = hmm.MultinomialHMM(
n_components=self.n_components, n_trials=self.n_trials,
params=params, init_params=params)
h._init(X, lengths)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_multinomial_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(init_params='', n_components=2, n_iter=1, n_trials=5,
random_state=RandomState(MT19937) at 0xFFFF9B17F840)
X = array([[0, 0, 3, 2],
[1, 2, 1, 1],
[3, 1, 1, 0],
[4, 1, 0, 0],
[1, 0, 2, 2],
[0, 0, 3, 2],
[1, 1, 3, 0],
[0, 1, 1, 3],
[3, 0, 1, 1],
[0, 0, 3, 2]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
__________________ TestMultinomialHMM.test_fit_with_init[log] __________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff94a6e7b0>
implementation = 'log', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_init(self, implementation, params='ste', n_iter=5):
lengths = [10] * 10
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(sum(lengths))
# use init_function to initialize paramerters
h = hmm.MultinomialHMM(
n_components=self.n_components, n_trials=self.n_trials,
params=params, init_params=params)
h._init(X, lengths)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_multinomial_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(init_params='', n_components=2, n_iter=1, n_trials=5,
random_state=RandomState(MT19937) at 0xFFFF9B17F840)
X = array([[4, 0, 0, 1],
[0, 1, 2, 2],
[0, 1, 0, 4],
[3, 1, 1, 0],
[0, 0, 1, 4],
[0, 1, 2, 2],
[1, 0, 2, 2],
[0, 0, 4, 1],
[0, 1, 3, 1],
[3, 2, 0, 0]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
________ TestMultinomialHMM.test_compare_with_categorical_hmm[scaling] _________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff94a89310>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_compare_with_categorical_hmm(self, implementation):
n_components = 2 # ['Rainy', 'Sunny']
n_features = 3 # ['walk', 'shop', 'clean']
n_trials = 1
startprob = np.array([0.6, 0.4])
transmat = np.array([[0.7, 0.3], [0.4, 0.6]])
emissionprob = np.array([[0.1, 0.4, 0.5],
[0.6, 0.3, 0.1]])
h1 = hmm.MultinomialHMM(
n_components=n_components, n_trials=n_trials,
implementation=implementation)
h2 = hmm.CategoricalHMM(
n_components=n_components, implementation=implementation)
h1.startprob_ = startprob
h2.startprob_ = startprob
h1.transmat_ = transmat
h2.transmat_ = transmat
h1.emissionprob_ = emissionprob
h2.emissionprob_ = emissionprob
X1 = np.array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
X2 = [[0], [1], [2]] # different input format for CategoricalHMM
> log_prob1, state_sequence1 = h1.decode(X1, algorithm="viterbi")
hmmlearn/tests/test_multinomial_hmm.py:161:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(implementation='scaling', n_components=2, n_trials=1)
X = array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
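The decode failures above stop at _hmmc.viterbi before any decoding is done. As with the forward calls, the expected semantics can be read off the call site: a log-space Viterbi pass over (startprob_, transmat_, log_frameprob) returning a log probability and a state sequence. A minimal NumPy sketch under that assumption (not hmmlearn's compiled implementation):

    import numpy as np

    def viterbi(startprob, transmat, log_frameprob):
        # Log-space Viterbi decoding, matching the (log_prob, state_sequence)
        # unpacking used by decode() in the traceback above (sketch only).
        n_samples, n_components = log_frameprob.shape
        log_startprob = np.log(startprob)
        log_transmat = np.log(transmat)

        lattice = np.empty((n_samples, n_components))
        backpointers = np.empty((n_samples, n_components), dtype=int)

        lattice[0] = log_startprob + log_frameprob[0]
        for t in range(1, n_samples):
            # scores[i, j]: best path ending in state i at t-1, moving to j.
            scores = lattice[t - 1][:, None] + log_transmat
            backpointers[t] = scores.argmax(axis=0)
            lattice[t] = scores.max(axis=0) + log_frameprob[t]

        state_sequence = np.empty(n_samples, dtype=int)
        state_sequence[-1] = lattice[-1].argmax()
        for t in range(n_samples - 2, -1, -1):
            state_sequence[t] = backpointers[t + 1, state_sequence[t + 1]]
        return lattice[-1].max(), state_sequence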
__________ TestMultinomialHMM.test_compare_with_categorical_hmm[log] ___________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff94a89610>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_compare_with_categorical_hmm(self, implementation):
n_components = 2 # ['Rainy', 'Sunny']
n_features = 3 # ['walk', 'shop', 'clean']
n_trials = 1
startprob = np.array([0.6, 0.4])
transmat = np.array([[0.7, 0.3], [0.4, 0.6]])
emissionprob = np.array([[0.1, 0.4, 0.5],
[0.6, 0.3, 0.1]])
h1 = hmm.MultinomialHMM(
n_components=n_components, n_trials=n_trials,
implementation=implementation)
h2 = hmm.CategoricalHMM(
n_components=n_components, implementation=implementation)
h1.startprob_ = startprob
h2.startprob_ = startprob
h1.transmat_ = transmat
h2.transmat_ = transmat
h1.emissionprob_ = emissionprob
h2.emissionprob_ = emissionprob
X1 = np.array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
X2 = [[0], [1], [2]] # different input format for CategoricalHMM
> log_prob1, state_sequence1 = h1.decode(X1, algorithm="viterbi")
hmmlearn/tests/test_multinomial_hmm.py:161:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(n_components=2, n_trials=1)
X = array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
__________________ TestPoissonHMM.test_score_samples[scaling] __________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff94f575c0>
implementation = 'scaling', n_samples = 1000
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples(self, implementation, n_samples=1000):
h = self.new_hmm(implementation)
X, state_sequence = h.sample(n_samples)
assert X.ndim == 2
assert len(X) == len(state_sequence) == n_samples
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_poisson_hmm.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(implementation='scaling', n_components=2, random_state=0)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
...,
[1, 5, 0],
[1, 6, 0],
[2, 3, 0]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
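Every failure in this run trips the same pybind11 GIL assertion inside the compiled _hmmc helpers. The stderr above notes that the check can be compiled out with PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF, provided the macro is defined for every translation unit of the extension. As a hedged illustration only (this is not hmmlearn's actual build configuration, and disabling the assertion does not address its cause), such a macro could be passed through setuptools like this:

    from setuptools import Extension, setup

    ext = Extension(
        "example._hmmc",              # illustrative module name
        sources=["src/_hmmc.cpp"],    # illustrative source list
        define_macros=[
            # define the macro (no value) for all sources of this extension
            ("PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF", None),
        ],
        language="c++",
    )

    setup(name="example", version="0.0", ext_modules=[ext])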
____________________ TestPoissonHMM.test_score_samples[log] ____________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff94f57490>
implementation = 'log', n_samples = 1000
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples(self, implementation, n_samples=1000):
h = self.new_hmm(implementation)
X, state_sequence = h.sample(n_samples)
assert X.ndim == 2
assert len(X) == len(state_sequence) == n_samples
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_poisson_hmm.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(n_components=2, random_state=0)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
...,
[1, 5, 0],
[1, 6, 0],
[2, 3, 0]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______________________ TestPoissonHMM.test_fit[scaling] _______________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff94a83530>
implementation = 'scaling', params = 'stl', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stl', n_iter=5):
h = self.new_hmm(implementation)
h.params = params
lengths = np.array([10] * 10)
X, _state_sequence = h.sample(lengths.sum())
# Mess up the parameters and see if we can re-learn them.
np.random.seed(0)
h.startprob_ = normalized(np.random.random(self.n_components))
h.transmat_ = normalized(
np.random.random((self.n_components, self.n_components)),
axis=1)
h.lambdas_ = np.random.gamma(
shape=2, size=(self.n_components, self.n_features))
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_poisson_hmm.py:62:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(implementation='scaling', init_params='', n_components=2, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF94885840)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
[3, 0, 4],
[2, 0, 4],
[4, 3, 0],
[0, 5, 1],
[0, 4, 0],
[4, 2, 7],
[4, 4, 0]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
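Most of the failing tests go through the helper assert_log_likelihood_increasing from hmmlearn/tests/__init__.py. The model reprs above (n_iter=1, init_params='') show how it drives the model; the following is a hedged reconstruction of that idea, not the helper's actual source: fit one EM/VI iteration at a time and check that the log likelihood never decreases.

    import numpy as np

    def assert_log_likelihood_increasing(h, X, lengths, n_iter):
        # Hedged reconstruction, not hmmlearn's implementation.
        h.n_iter = 1        # one iteration per fit() call
        h.init_params = ""  # keep the caller's initialization
        scores = []
        for _ in range(n_iter):
            h.fit(X, lengths=lengths)            # the call that fails above
            scores.append(h.score(X, lengths=lengths))
        # EM/VI should not decrease the objective between iterations.
        assert np.all(np.diff(scores) >= -1e-6), scores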
_________________________ TestPoissonHMM.test_fit[log] _________________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff94f03020>
implementation = 'log', params = 'stl', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stl', n_iter=5):
h = self.new_hmm(implementation)
h.params = params
lengths = np.array([10] * 10)
X, _state_sequence = h.sample(lengths.sum())
# Mess up the parameters and see if we can re-learn them.
np.random.seed(0)
h.startprob_ = normalized(np.random.random(self.n_components))
h.transmat_ = normalized(
np.random.random((self.n_components, self.n_components)),
axis=1)
h.lambdas_ = np.random.gamma(
shape=2, size=(self.n_components, self.n_features))
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_poisson_hmm.py:62:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(init_params='', n_components=2, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF94886440)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
[3, 0, 4],
[2, 0, 4],
[4, 3, 0],
[0, 5, 1],
[0, 4, 0],
[4, 2, 7],
[4, 4, 0]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
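For reference, _hmmc.forward_log(startprob, transmat, log_frameprob) implements the standard forward recursion in log space and returns the sequence log probability together with the forward lattice. A pure-NumPy sketch of that computation (reference only; the compiled routine is what raises the RuntimeError above):

    import numpy as np
    from scipy.special import logsumexp

    def forward_log(startprob, transmat, log_frameprob):
        n_samples, n_components = log_frameprob.shape
        log_startprob = np.log(startprob)
        log_transmat = np.log(transmat)
        fwdlattice = np.empty((n_samples, n_components))
        fwdlattice[0] = log_startprob + log_frameprob[0]
        for t in range(1, n_samples):
            for j in range(n_components):
                # log-sum-exp over predecessor states, plus the emission term
                fwdlattice[t, j] = logsumexp(
                    fwdlattice[t - 1] + log_transmat[:, j]) + log_frameprob[t, j]
        log_prob = logsumexp(fwdlattice[-1])
        return log_prob, fwdlattice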
___________________ TestPoissonHMM.test_fit_lambdas[scaling] ___________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff94f02e00>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_lambdas(self, implementation):
> self.test_fit(implementation, 'l')
hmmlearn/tests/test_poisson_hmm.py:66:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_poisson_hmm.py:62: in test_fit
assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(implementation='scaling', init_params='', n_components=2, n_iter=1,
params='l', random_state=RandomState(MT19937) at 0xFFFF94791C40)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
[3, 0, 4],
[2, 0, 4],
[4, 3, 0],
[0, 5, 1],
[0, 4, 0],
[4, 2, 7],
[4, 4, 0]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_____________________ TestPoissonHMM.test_fit_lambdas[log] _____________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff94aa4450>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_lambdas(self, implementation):
> self.test_fit(implementation, 'l')
hmmlearn/tests/test_poisson_hmm.py:66:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_poisson_hmm.py:62: in test_fit
assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(init_params='', n_components=2, n_iter=1, params='l',
random_state=RandomState(MT19937) at 0xFFFF94791940)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
[3, 0, 4],
[2, 0, 4],
[4, 3, 0],
[0, 5, 1],
[0, 4, 0],
[4, 2, 7],
[4, 4, 0]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________________ TestPoissonHMM.test_fit_with_init[scaling] __________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff94aa5150>
implementation = 'scaling', params = 'stl', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_init(self, implementation, params='stl', n_iter=5):
lengths = [10] * 10
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(sum(lengths))
# use init_function to initialize parameters

h = hmm.PoissonHMM(self.n_components, params=params,
init_params=params)
h._init(X, lengths)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_poisson_hmm.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(init_params='', n_components=2, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF9B17F840)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
[3, 0, 4],
[2, 0, 4],
[4, 3, 0],
[0, 5, 1],
[0, 4, 0],
[4, 2, 7],
[4, 4, 0]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestPoissonHMM.test_fit_with_init[log] ____________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff94e9b200>
implementation = 'log', params = 'stl', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_init(self, implementation, params='stl', n_iter=5):
lengths = [10] * 10
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(sum(lengths))
# use init_function to initialize parameters
h = hmm.PoissonHMM(self.n_components, params=params,
init_params=params)
h._init(X, lengths)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_poisson_hmm.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(init_params='', n_components=2, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF9B17F840)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
[3, 0, 4],
[2, 0, 4],
[4, 3, 0],
[0, 5, 1],
[0, 4, 0],
[4, 2, 7],
[4, 4, 0]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestPoissonHMM.test_criterion[scaling] ____________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff94e9b2f0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(412)
m1 = self.new_hmm(implementation)
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.PoissonHMM(n, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_poisson_hmm.py:93:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(implementation='scaling', n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFF94885440)
X = array([[1, 5, 0],
[3, 5, 0],
[4, 1, 4],
...,
[1, 4, 0],
[3, 6, 0],
[5, 0, 4]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
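The 'scaling' implementation uses the scaled forward recursion instead of log space: each row of the lattice is renormalized and the log probability is recovered from the scaling factors. A pure-NumPy sketch, under the convention that each scaling factor is the reciprocal of the row sum (the compiled _hmmc.forward_scaling may use a different convention):

    import numpy as np

    def forward_scaling(startprob, transmat, frameprob):
        n_samples, n_components = frameprob.shape
        fwdlattice = np.empty((n_samples, n_components))
        scaling_factors = np.empty(n_samples)
        fwdlattice[0] = startprob * frameprob[0]
        scaling_factors[0] = 1.0 / fwdlattice[0].sum()
        fwdlattice[0] *= scaling_factors[0]
        for t in range(1, n_samples):
            # propagate, weight by the emission probabilities, then rescale
            fwdlattice[t] = (fwdlattice[t - 1] @ transmat) * frameprob[t]
            scaling_factors[t] = 1.0 / fwdlattice[t].sum()
            fwdlattice[t] *= scaling_factors[t]
        log_prob = -np.log(scaling_factors).sum()
        return log_prob, fwdlattice, scaling_factors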
______________________ TestPoissonHMM.test_criterion[log] ______________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff94a6fa10>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(412)
m1 = self.new_hmm(implementation)
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.PoissonHMM(n, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_poisson_hmm.py:93:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFF94885840)
X = array([[1, 5, 0],
[3, 5, 0],
[4, 1, 4],
...,
[1, 4, 0],
[3, 6, 0],
[5, 0, 4]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_____________ TestVariationalCategorical.test_init_priors[scaling] _____________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff94af6ad0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_init_priors(self, implementation):
sequences, lengths = self.get_from_one_beal(7, 100, None)
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="",
implementation=implementation)
model.pi_prior_ = np.full((4,), .25)
model.pi_posterior_ = np.full((4,), 7/4)
model.transmat_prior_ = np.full((4, 4), .25)
model.transmat_posterior_ = np.full((4, 4), 7/4)
model.emissionprob_prior_ = np.full((4, 3), 1/3)
model.emissionprob_posterior_ = np.asarray([[.3, .4, .3],
[.8, .1, .1],
[.2, .2, .6],
[.2, .6, .2]])
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:73:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(implementation='scaling', init_params='',
n_components=4, n_features=3, n_iter=1,
random_state=1984)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______________ TestVariationalCategorical.test_init_priors[log] _______________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff94af6e90>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_init_priors(self, implementation):
sequences, lengths = self.get_from_one_beal(7, 100, None)
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="",
implementation=implementation)
model.pi_prior_ = np.full((4,), .25)
model.pi_posterior_ = np.full((4,), 7/4)
model.transmat_prior_ = np.full((4, 4), .25)
model.transmat_posterior_ = np.full((4, 4), 7/4)
model.emissionprob_prior_ = np.full((4, 3), 1/3)
model.emissionprob_posterior_ = np.asarray([[.3, .4, .3],
[.8, .1, .1],
[.2, .2, .6],
[.2, .6, .2]])
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:73:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(init_params='', n_components=4, n_features=3,
n_iter=1, random_state=1984)
X = array([[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1]... [1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_____________ TestVariationalCategorical.test_n_features[scaling] ______________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff94f57bb0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_n_features(self, implementation):
sequences, lengths = self.get_from_one_beal(7, 100, None)
# Learn n_features
model = vhmm.VariationalCategoricalHMM(
4, implementation=implementation)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:82:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(implementation='scaling', init_params='',
n_components=4, n_features=3, n_iter=1)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______________ TestVariationalCategorical.test_n_features[log] ________________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff94f57ce0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_n_features(self, implementation):
sequences, lengths = self.get_from_one_beal(7, 100, None)
# Learn n_features
model = vhmm.VariationalCategoricalHMM(
4, implementation=implementation)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:82:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(init_params='', n_components=4, n_features=3,
n_iter=1)
X = array([[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1]... [1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________ TestVariationalCategorical.test_init_incorrect_priors[scaling] ________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff949a0b90>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_init_incorrect_priors(self, implementation):
sequences, lengths = self.get_from_one_beal(7, 100, None)
# Test startprob shape
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="te",
implementation=implementation)
model.startprob_prior_ = np.full((3,), .25)
model.startprob_posterior_ = np.full((4,), 7/4)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="te",
implementation=implementation)
model.startprob_prior_ = np.full((4,), .25)
model.startprob_posterior_ = np.full((3,), 7/4)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test transmat shape
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.transmat_prior_ = np.full((3, 3), .25)
model.transmat_posterior_ = np.full((4, 4), .25)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.transmat_prior_ = np.full((4, 4), .25)
model.transmat_posterior_ = np.full((3, 3), 7/4)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test emission shape
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="st",
implementation=implementation)
model.emissionprob_prior_ = np.full((3, 3), 1/3)
model.emissionprob_posterior_ = np.asarray([[.3, .4, .3],
[.8, .1, .1],
[.2, .2, .6],
[.2, .6, .2]])
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test too many n_features
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.emissionprob_prior_ = np.full((4, 4), 7/4)
model.emissionprob_posterior_ = np.full((4, 4), .25)
model.n_features_ = 10
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Too small n_features
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.emissionprob_prior_ = np.full((4, 4), 7/4)
model.emissionprob_posterior_ = np.full((4, 4), .25)
model.n_features_ = 1
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test that setting the desired prior value works
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="ste",
implementation=implementation,
startprob_prior=1, transmat_prior=2, emissionprob_prior=3)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:191:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(emissionprob_prior=3, implementation='scaling',
init_params='', n_...,
n_iter=1, random_state=1984, startprob_prior=1,
transmat_prior=2)
X = array([[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1]... [1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________ TestVariationalCategorical.test_init_incorrect_priors[log] __________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff94f03570>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_init_incorrect_priors(self, implementation):
sequences, lengths = self.get_from_one_beal(7, 100, None)
# Test startprob shape
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="te",
implementation=implementation)
model.startprob_prior_ = np.full((3,), .25)
model.startprob_posterior_ = np.full((4,), 7/4)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="te",
implementation=implementation)
model.startprob_prior_ = np.full((4,), .25)
model.startprob_posterior_ = np.full((3,), 7/4)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test transmat shape
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.transmat_prior_ = np.full((3, 3), .25)
model.transmat_posterior_ = np.full((4, 4), .25)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.transmat_prior_ = np.full((4, 4), .25)
model.transmat_posterior_ = np.full((3, 3), 7/4)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test emission shape
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="st",
implementation=implementation)
model.emissionprob_prior_ = np.full((3, 3), 1/3)
model.emissionprob_posterior_ = np.asarray([[.3, .4, .3],
[.8, .1, .1],
[.2, .2, .6],
[.2, .6, .2]])
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test too many n_features
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.emissionprob_prior_ = np.full((4, 4), 7/4)
model.emissionprob_posterior_ = np.full((4, 4), .25)
model.n_features_ = 10
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Too small n_features
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.emissionprob_prior_ = np.full((4, 4), 7/4)
model.emissionprob_posterior_ = np.full((4, 4), .25)
model.n_features_ = 1
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test that setting the desired prior value works
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="ste",
implementation=implementation,
startprob_prior=1, transmat_prior=2, emissionprob_prior=3)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:191:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(emissionprob_prior=3, init_params='', n_components=4,
n_features=3, n_iter=1, random_state=1984,
startprob_prior=1, transmat_prior=2)
X = array([[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2]... [2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestVariationalCategorical.test_fit_beal[scaling] _______________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff94f03240>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_beal(self, implementation):
rs = check_random_state(1984)
m1, m2, m3 = self.get_beal_models()
sequences = []
lengths = []
for i in range(7):
for m in [m1, m2, m3]:
sequences.append(m.sample(39, random_state=rs)[0])
lengths.append(len(sequences[-1]))
sequences = np.concatenate(sequences)
model = vhmm.VariationalCategoricalHMM(12, n_iter=500,
implementation=implementation,
tol=1e-6,
random_state=rs,
verbose=False)
> assert_log_likelihood_increasing(model, sequences, lengths, 100)
hmmlearn/tests/test_variational_categorical.py:213:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(implementation='scaling', init_params='',
n_components=12, n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF94790C40)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________ TestVariationalCategorical.test_fit_beal[log] _________________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff94aa5550>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_beal(self, implementation):
rs = check_random_state(1984)
m1, m2, m3 = self.get_beal_models()
sequences = []
lengths = []
for i in range(7):
for m in [m1, m2, m3]:
sequences.append(m.sample(39, random_state=rs)[0])
lengths.append(len(sequences[-1]))
sequences = np.concatenate(sequences)
model = vhmm.VariationalCategoricalHMM(12, n_iter=500,
implementation=implementation,
tol=1e-6,
random_state=rs,
verbose=False)
> assert_log_likelihood_increasing(model, sequences, lengths, 100)
hmmlearn/tests/test_variational_categorical.py:213:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(init_params='', n_components=12, n_features=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF94886F40)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______ TestVariationalCategorical.test_fit_and_compare_with_em[scaling] _______
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff94aa5650>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_and_compare_with_em(self, implementation):
# Explicitly setting Random State to test that certain
# model states will become "unused"
sequences, lengths = self.get_from_one_beal(7, 100, 1984)
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984,
init_params="e",
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_categorical.py:225:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(implementation='scaling', init_params='e',
n_components=4, n_features=3, n_iter=500,
random_state=1984)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestVariationalCategorical.test_fit_and_compare_with_em[log] _________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff94e9b5c0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_and_compare_with_em(self, implementation):
# Explicitly setting Random State to test that certain
# model states will become "unused"
sequences, lengths = self.get_from_one_beal(7, 100, 1984)
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984,
init_params="e",
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_categorical.py:225:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(init_params='e', n_components=4, n_features=3,
n_iter=500, random_state=1984)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______ TestVariationalCategorical.test_fit_length_1_sequences[scaling] ________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff94e9b6b0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_length_1_sequences(self, implementation):
sequences1, lengths1 = self.get_from_one_beal(7, 100, 1984)
# Include some length 1 sequences
sequences2, lengths2 = self.get_from_one_beal(1, 1, 1984)
sequences = np.concatenate([sequences1, sequences2])
lengths = np.concatenate([lengths1, lengths2])
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984,
implementation=implementation)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:255:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(implementation='scaling', init_params='',
n_components=4, n_features=3, n_iter=1,
random_state=1984)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestVariationalCategorical.test_fit_length_1_sequences[log] __________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff94964d70>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_length_1_sequences(self, implementation):
sequences1, lengths1 = self.get_from_one_beal(7, 100, 1984)
# Include some length 1 sequences
sequences2, lengths2 = self.get_from_one_beal(1, 1, 1984)
sequences = np.concatenate([sequences1, sequences2])
lengths = np.concatenate([lengths1, lengths2])
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984,
implementation=implementation)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:255:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(init_params='', n_components=4, n_features=3,
n_iter=1, random_state=1984)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________________ TestFull.test_random_fit[scaling] _______________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffff94f57950>
implementation = 'scaling', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(covariance_type='full', implementation='scaling', init_params='',
n_components=3)
rs = RandomState(MT19937) at 0xFFFF94791240, lengths = [200, 200, 200, 200, 200]
X = array([[ -6.86811158, -15.5218548 , 2.57129256],
[ -4.58815074, -16.43758315, 3.29235714],
[ -6.5599..., 3.10129119],
[ -8.58810682, 5.49343563, 8.40750902],
[ -6.98040052, -16.12864527, 2.64082744]])
_state_sequence = array([1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 0,
0, 0, 0, 0, 2, 0, 0, 0, 1, 2, 0, 1, 0,...0, 2, 1,
1, 0, 0, 1, 1, 2, 0, 0, 0, 0, 2, 2, 0, 2, 2, 0, 0, 0, 2, 1, 0, 1,
2, 1, 0, 2, 2, 2, 0, 1, 0, 1])
model = VariationalGaussianHMM(implementation='scaling', init_params='', n_components=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF94791240,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(implementation='scaling', init_params='', n_components=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF94791240,
tol=1e-09)
X = array([[ -6.86811158, -15.5218548 , 2.57129256],
[ -4.58815074, -16.43758315, 3.29235714],
[ -6.5599..., 12.30542549],
[ 3.45864836, 9.93266313, 13.33197942],
[ 2.81248345, 8.96100579, 10.47967146]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________________ TestFull.test_random_fit[log] _________________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffff94914c30>
implementation = 'log', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(covariance_type='full', init_params='', n_components=3)
rs = RandomState(MT19937) at 0xFFFF94792A40, lengths = [200, 200, 200, 200, 200]
X = array([[ -6.86811158, -15.5218548 , 2.57129256],
[ -4.58815074, -16.43758315, 3.29235714],
[ -6.5599..., 3.10129119],
[ -8.58810682, 5.49343563, 8.40750902],
[ -6.98040052, -16.12864527, 2.64082744]])
_state_sequence = array([1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 0,
0, 0, 0, 0, 2, 0, 0, 0, 1, 2, 0, 1, 0,...0, 2, 1,
1, 0, 0, 1, 1, 2, 0, 0, 0, 0, 2, 2, 0, 2, 2, 0, 0, 0, 2, 1, 0, 1,
2, 1, 0, 2, 2, 2, 0, 1, 0, 1])
model = VariationalGaussianHMM(init_params='', n_components=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF94792A40,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(init_params='', n_components=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF94792A40,
tol=1e-09)
X = array([[ -6.86811158, -15.5218548 , 2.57129256],
[ -4.58815074, -16.43758315, 3.29235714],
[ -6.5599..., 12.30542549],
[ 3.45864836, 9.93266313, 13.33197942],
[ 2.81248345, 8.96100579, 10.47967146]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestFull.test_fit_mcgrory_titterington1d[scaling] _______________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffff949a1fd0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(implementation='scaling', init_params='mc',
n_components=5, n_iter=1000,
random_state=RandomState(MT19937) at 0xFFFF94792E40,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________ TestFull.test_fit_mcgrory_titterington1d[log] _________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffff94f039b0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(init_params='mc', n_components=5, n_iter=1000,
random_state=RandomState(MT19937) at 0xFFFF94793140,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestFull.test_common_initialization[scaling] _________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffff94f03130>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(implementation='scaling', init_params='', n_components=4,
n_iter=1, tol=1e-09)
X = array([[ 0.21535104],
[ 2.82985744],
[-0.97185779],
[ 2.89081593],
[-0.66290202],
[...644159],
[ 0.32126301],
[ 2.73373158],
[-0.48778415],
[ 3.2352048 ],
[-2.21829728]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___________________ TestFull.test_common_initialization[log] ___________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffff94aa5350>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(init_params='', n_components=4, n_iter=1, tol=1e-09)
X = array([[-0.33240202],
[ 1.16575351],
[ 0.76708158],
[-0.16665794],
[-2.0417122 ],
[...612387],
[-1.47774877],
[ 1.99699008],
[ 3.9346355 ],
[-1.84294702],
[-2.14332482]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestFull.test_initialization[scaling] _____________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffff94af6fd0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [[[2.]], [[2.]], [[2.]]]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_posterior_ = [[2.]], [[2.]], [[2.]] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Manually setup covariance
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=[2., 2., 2., 2.],
scale_prior=[[[2.]], [[2.]], [[2.]], [[2.]]])
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:233:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0],
dof_prior=[2.0, 2.0, 2.0, 2.0], impleme...FFF94887840,
scale_prior=[[[2.0]], [[2.0]], [[2.0]], [[2.0]]],
tol=1e-09)
X = array([[-0.97620016],
[ 0.79725115],
[-0.27940365],
[ 3.32645134],
[-2.69876488],
[...774038],
[ 3.83803194],
[-1.46435466],
[ 2.95456941],
[-0.13443947],
[-0.96474541]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________________ TestFull.test_initialization[log] _______________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffff94af7110>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [[[2.]], [[2.]], [[2.]]]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_posterior_ = [[2.]], [[2.]], [[2.]] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Manually setup covariance
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=[2., 2., 2., 2.],
scale_prior=[[[2.]], [[2.]], [[2.]], [[2.]]])
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:233:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0],
dof_prior=[2.0, 2.0, 2.0, 2.0], init_pa...FFF94792640,
scale_prior=[[[2.0]], [[2.0]], [[2.0]], [[2.0]]],
tol=1e-09)
X = array([[ 1.90962598],
[ 1.38857322],
[ 0.88432176],
[ 1.50437126],
[-1.37679708],
[...987493],
[ 1.1246179 ],
[-2.31770774],
[ 2.39814844],
[ 1.40856394],
[ 2.12694691]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________________ TestTied.test_random_fit[scaling] _______________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffff94914b00>
implementation = 'scaling', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(covariance_type='tied', implementation='scaling', init_params='',
n_components=3)
rs = RandomState(MT19937) at 0xFFFF94790140, lengths = [200, 200, 200, 200, 200]
X = array([[ -6.76809081, -17.57929881, 2.65993861],
[ 4.47790401, 10.95422031, 12.25009349],
[ -9.2822..., 2.91189727],
[ 1.47179701, 9.35583105, 10.30599288],
[ -4.00663682, -15.17296134, 2.9706196 ]])
_state_sequence = array([1, 2, 0, 0, 0, 1, 1, 0, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0,
0, 0, 0, 0, 1, 1, 2, 0, 0, 0, 0, 0, 2,...0, 0, 0,
0, 0, 1, 0, 0, 0, 2, 1, 1, 0, 0, 1, 1, 2, 0, 0, 0, 0, 2, 2, 0, 2,
2, 0, 0, 0, 2, 1, 0, 1, 2, 1])
model = VariationalGaussianHMM(covariance_type='tied', implementation='scaling',
init_params='', n_comp...n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF94790140,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='tied', implementation='scaling',
init_params='', n_comp...n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF94790140,
tol=1e-09)
X = array([[ -6.76809081, -17.57929881, 2.65993861],
[ 4.47790401, 10.95422031, 12.25009349],
[ -9.2822..., 8.29790309],
[ -7.45761904, 8.0443883 , 8.74775768],
[ -7.54100296, 7.27668055, 8.35765657]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________________ TestTied.test_random_fit[log] _________________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffff949148a0>
implementation = 'log', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(covariance_type='tied', init_params='', n_components=3)
rs = RandomState(MT19937) at 0xFFFF94790940, lengths = [200, 200, 200, 200, 200]
X = array([[ -6.54597428, -14.48319166, 3.52814708],
[ 3.02773721, 8.66210382, 10.95226001],
[ -9.6765..., 1.73843505],
[ 3.90207131, 11.87153515, 12.46452122],
[ -6.04735701, -17.31754837, 1.46456652]])
_state_sequence = array([1, 2, 0, 0, 0, 1, 1, 0, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0,
0, 0, 0, 0, 1, 1, 2, 0, 0, 0, 0, 0, 2,...0, 0, 0,
0, 0, 1, 0, 0, 0, 2, 1, 1, 0, 0, 1, 1, 2, 0, 0, 0, 0, 2, 2, 0, 2,
2, 0, 0, 0, 2, 1, 0, 1, 2, 1])
model = VariationalGaussianHMM(covariance_type='tied', init_params='', n_components=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF94790940,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='tied', init_params='', n_components=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF94790940,
tol=1e-09)
X = array([[ -6.54597428, -14.48319166, 3.52814708],
[ 3.02773721, 8.66210382, 10.95226001],
[ -9.6765..., 10.28703698],
[ -9.27093832, 7.48888941, 7.75556056],
[ -9.50212106, 8.22396714, 7.70516698]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestTied.test_fit_mcgrory_titterington1d[scaling] _______________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffff949a2e70>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='tied', implementation='scaling',
init_params='mc', n_co...ter=1000,
random_state=RandomState(MT19937) at 0xFFFF94887840,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________ TestTied.test_fit_mcgrory_titterington1d[log] _________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffff94f03790>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='tied', init_params='mc', n_components=5,
n_iter=1000,
random_state=RandomState(MT19937) at 0xFFFF94886B40,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestTied.test_common_initialization[scaling] _________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffff94f03350>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='tied', implementation='scaling',
init_params='', n_components=4, n_iter=1, tol=1e-09)
X = array([[ 3.02406044],
[ 0.15141778],
[ 0.44490074],
[ 0.92052631],
[-0.18359039],
[...156249],
[ 0.61494698],
[-2.27023399],
[ 2.64757888],
[-2.00572944],
[ 0.08367312]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___________________ TestTied.test_common_initialization[log] ___________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffff94aa5850>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='tied', init_params='', n_components=4,
n_iter=1, tol=1e-09)
X = array([[-1.09489413],
[-0.12957722],
[-1.73146656],
[ 3.55253037],
[ 2.62945991],
[...229695],
[ 0.93327602],
[ 3.14435486],
[-2.68712136],
[-0.81984256],
[ 3.63942885]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestTied.test_initialization[scaling] _____________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffff94af74d0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [[[2]]]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_posterior_ = [[[2.]], [[2.]], [[2.]]] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Manually setup covariance
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=2,
scale_prior=[[2]],
)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:318:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0], covariance_type='tied',
dof_prior=2, im... random_state=RandomState(MT19937) at 0xFFFF94792640,
scale_prior=[[2]], tol=1e-09)
X = array([[ 2.7343842 ],
[ 2.01508175],
[ 2.29638889],
[ 1.12585508],
[ 1.67279509],
[...808295],
[-0.79265056],
[-0.27745453],
[ 0.69004695],
[-0.23995418],
[-1.0133645 ]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________________ TestTied.test_initialization[log] _______________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffff94af7610>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [[[2]]]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_posterior_ = [[[2.]], [[2.]], [[2.]]] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Manually setup covariance
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=2,
scale_prior=[[2]],
)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:318:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0], covariance_type='tied',
dof_prior=2, in... random_state=RandomState(MT19937) at 0xFFFF94791840,
scale_prior=[[2]], tol=1e-09)
X = array([[-1.51990156],
[-0.77421241],
[ 3.56219686],
[-1.64888838],
[ 2.6276434 ],
[...179403],
[-0.686967 ],
[ 1.27430623],
[-0.31739316],
[ 1.74639412],
[-2.01831639]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestSpherical.test_random_fit[scaling] ____________________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffff94914640>
implementation = 'scaling', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='', n_components=3)
rs = RandomState(MT19937) at 0xFFFF94AA6240, lengths = [200, 200, 200, 200, 200]
X = array([[ -8.80112327, 8.00989019, 9.06698421],
[ -8.93310855, 8.03047065, 8.92124378],
[ -6.1530..., 8.83737591],
[ 2.84752765, 10.20119432, 12.21355309],
[ -8.94111272, 8.15425357, 8.74112105]])
_state_sequence = array([0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 2, 1, 1,
1, 1, 1, 2, 2, 2, 2, 0, 2, 0, 0, 0, 0,...2, 1, 1,
2, 2, 1, 1, 1, 2, 1, 2, 1, 0, 2, 2, 2, 1, 1, 1, 2, 0, 2, 2, 2, 2,
0, 2, 2, 2, 0, 0, 0, 0, 2, 0])
model = VariationalGaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='', n...n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF94AA6240,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='', n...n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF94AA6240,
tol=1e-09)
X = array([[ -8.80112327, 8.00989019, 9.06698421],
[ -8.93310855, 8.03047065, 8.92124378],
[ -6.1530..., 2.96146404],
[ -5.67847522, -16.01739311, 2.72149483],
[ 3.0501041 , 10.1190271 , 11.98035801]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________________ TestSpherical.test_random_fit[log] ______________________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffff949143e0>
implementation = 'log', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(covariance_type='spherical', init_params='', n_components=3)
rs = RandomState(MT19937) at 0xFFFF94886B40, lengths = [200, 200, 200, 200, 200]
X = array([[ -8.80112327, 8.00989019, 9.06698421],
[ -8.93310855, 8.03047065, 8.92124378],
[ -6.1530..., 8.83737591],
[ 2.84752765, 10.20119432, 12.21355309],
[ -8.94111272, 8.15425357, 8.74112105]])
_state_sequence = array([0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 2, 1, 1,
1, 1, 1, 2, 2, 2, 2, 0, 2, 0, 0, 0, 0,...2, 1, 1,
2, 2, 1, 1, 1, 2, 1, 2, 1, 0, 2, 2, 2, 1, 1, 1, 2, 0, 2, 2, 2, 2,
0, 2, 2, 2, 0, 0, 0, 0, 2, 0])
model = VariationalGaussianHMM(covariance_type='spherical', init_params='',
n_components=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF94886B40,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='spherical', init_params='',
n_components=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF94886B40,
tol=1e-09)
X = array([[ -8.80112327, 8.00989019, 9.06698421],
[ -8.93310855, 8.03047065, 8.92124378],
[ -6.1530..., 2.96146404],
[ -5.67847522, -16.01739311, 2.72149483],
[ 3.0501041 , 10.1190271 , 11.98035801]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
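For completeness, the opt-out mentioned in the same message would look as follows; this is only a transcription of the message's own suggestion, with its ODR caveat, not a fix applied in this build:

    // The assertion can be compiled out, but the define must appear before any
    // pybind11 header in every translation unit of the extension, otherwise the
    // build risks ODR violations (as the message above warns).
    #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF
    #include <pybind11/pybind11.h>
    #include <pybind11/numpy.h>

    // The same define is easier to keep consistent as a compile flag:
    //   -DPYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF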
____________ TestSpherical.test_fit_mcgrory_titterington1d[scaling] ____________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffff949a3d10>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='mc',...ter=1000,
random_state=RandomState(MT19937) at 0xFFFF94792040,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestSpherical.test_fit_mcgrory_titterington1d[log] ______________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffff94f03ac0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='spherical', init_params='mc',
n_components=5, n_iter=1000,
random_state=RandomState(MT19937) at 0xFFFF94792940,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestSpherical.test_common_initialization[scaling] _______________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffff94f03bd0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='', n_components=4, n_iter=1, tol=1e-09)
X = array([[ 1.58581198],
[-1.43013571],
[ 3.50073686],
[-2.09080284],
[ 1.48390039],
[...711457],
[ 1.8787106 ],
[ 2.31673751],
[ 0.62417883],
[-2.57450891],
[ 0.51093669]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________ TestSpherical.test_common_initialization[log] _________________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffff94aa5950>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='spherical', init_params='',
n_components=4, n_iter=1, tol=1e-09)
X = array([[ 2.55895004],
[ 1.9386079 ],
[-1.14441545],
[ 0.79939524],
[-0.84122716],
[...848896],
[-0.7355048 ],
[-1.27791075],
[-1.53171601],
[ 1.93602005],
[-1.20472876]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________________ TestSpherical.test_initialization[scaling] __________________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffff94af7390>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [2, 2, 2]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_posterior_ = [2, 2, 2] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Manually setup covariance
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=[2., 2., 2., 2.],
scale_prior=[2, 2, 2, 2],
)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:403:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0],
covariance_type='spherical',
... random_state=RandomState(MT19937) at 0xFFFF94886B40,
scale_prior=[2, 2, 2, 2], tol=1e-09)
X = array([[-0.69995355],
[ 1.11732084],
[ 2.34671222],
[ 0.38667263],
[ 0.49315166],
[...586139],
[ 0.81443462],
[-1.66759168],
[ 3.14268492],
[ 3.76227287],
[ 0.80644186]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestSpherical.test_initialization[log] ____________________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffff94af7750>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [2, 2, 2]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_posterior_ = [2, 2, 2] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Manually setup covariance
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=[2., 2., 2., 2.],
scale_prior=[2, 2, 2, 2],
)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:403:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0],
covariance_type='spherical',
... random_state=RandomState(MT19937) at 0xFFFF94AA7040,
scale_prior=[2, 2, 2, 2], tol=1e-09)
X = array([[ 3.45654067],
[-2.75120263],
[ 2.70685609],
[ 2.19256817],
[-0.71552539],
[...986977],
[-2.05296787],
[ 0.98484479],
[ 2.68913339],
[-0.30012857],
[ 3.23805001]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestDiagonal.test_random_fit[scaling] _____________________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffff949142b0>
implementation = 'scaling', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(implementation='scaling', init_params='', n_components=3)
rs = RandomState(MT19937) at 0xFFFF94792240, lengths = [200, 200, 200, 200, 200]
X = array([[ -8.69644052, 7.84695023, 9.16793735],
[ -9.13224583, 7.92499119, 9.31288597],
[ -5.8357..., 9.05969418],
[ 3.16991695, 9.72247605, 12.12314999],
[ 3.09806199, 9.95716109, 11.96433113]])
_state_sequence = array([0, 0, 1, 1, 0, 2, 2, 2, 2, 1, 1, 2, 1, 0, 0, 0, 1, 2, 1, 1, 1, 1,
1, 2, 2, 2, 2, 0, 2, 0, 0, 0, 0, 0, 0,...1, 2, 2,
1, 1, 1, 2, 1, 2, 1, 0, 2, 2, 2, 1, 1, 1, 2, 0, 2, 2, 2, 2, 0, 2,
2, 2, 0, 0, 0, 0, 2, 0, 2, 2])
model = VariationalGaussianHMM(covariance_type='diag', implementation='scaling',
init_params='', n_comp...n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF94792240,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='diag', implementation='scaling',
init_params='', n_comp...n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF94792240,
tol=1e-09)
X = array([[ -8.69644052, 7.84695023, 9.16793735],
[ -9.13224583, 7.92499119, 9.31288597],
[ -5.8357..., 11.98612945],
[ 2.90646378, 9.9957161 , 11.98128432],
[ -8.65470261, 8.11543755, 8.85803583]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________________ TestDiagonal.test_random_fit[log] _______________________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffff94914d60>
implementation = 'log', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}, h = GaussianHMM(init_params='', n_components=3)
rs = RandomState(MT19937) at 0xFFFF94790C40, lengths = [200, 200, 200, 200, 200]
X = array([[ -8.69644052, 7.84695023, 9.16793735],
[ -9.13224583, 7.92499119, 9.31288597],
[ -5.8357..., 9.05969418],
[ 3.16991695, 9.72247605, 12.12314999],
[ 3.09806199, 9.95716109, 11.96433113]])
_state_sequence = array([0, 0, 1, 1, 0, 2, 2, 2, 2, 1, 1, 2, 1, 0, 0, 0, 1, 2, 1, 1, 1, 1,
1, 2, 2, 2, 2, 0, 2, 0, 0, 0, 0, 0, 0,...1, 2, 2,
1, 1, 1, 2, 1, 2, 1, 0, 2, 2, 2, 1, 1, 1, 2, 0, 2, 2, 2, 2, 0, 2,
2, 2, 0, 0, 0, 0, 2, 0, 2, 2])
model = VariationalGaussianHMM(covariance_type='diag', init_params='', n_components=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF94790C40,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='diag', init_params='', n_components=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF94790C40,
tol=1e-09)
X = array([[ -8.69644052, 7.84695023, 9.16793735],
[ -9.13224583, 7.92499119, 9.31288597],
[ -5.8357..., 11.98612945],
[ 2.90646378, 9.9957161 , 11.98128432],
[ -8.65470261, 8.11543755, 8.85803583]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________ TestDiagonal.test_fit_mcgrory_titterington1d[scaling] _____________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffff949e8cb0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='diag', implementation='scaling',
init_params='mc', n_co...ter=1000,
random_state=RandomState(MT19937) at 0xFFFF94792840,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestDiagonal.test_fit_mcgrory_titterington1d[log] _______________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffff94f03ce0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='diag', init_params='mc', n_components=5,
n_iter=1000,
random_state=RandomState(MT19937) at 0xFFFF94792D40,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______________ TestDiagonal.test_common_initialization[scaling] _______________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffff94f03df0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='diag', implementation='scaling',
init_params='', n_components=4, n_iter=1, tol=1e-09)
X = array([[ 2.94840979],
[-0.4236967 ],
[-1.86164101],
[-2.70760383],
[ 0.52817596],
[...614648],
[ 1.17327289],
[-0.48308756],
[-1.23521059],
[ 2.96221347],
[-2.4055287 ]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestDiagonal.test_common_initialization[log] _________________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffff94aa5a50>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='diag', init_params='', n_components=4,
n_iter=1, tol=1e-09)
X = array([[-1.00900958],
[ 1.83548612],
[-1.18687723],
[ 1.39357219],
[ 2.31529054],
[...120186],
[-0.59813352],
[ 1.09476375],
[ 2.7001891 ],
[ 0.25515909],
[-1.58409402]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________________ TestDiagonal.test_initialization[scaling] ___________________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffff94af7890>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [[2], [2], [2]]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type=self.covariance_type,
implementation=implementation)
model.dof_prior_ = [1, 1, 1, 1]
model.dof_posterior_ = [1, 1, 1, 1]
model.scale_prior_ = [[2], [2], [2], [2]]
model.scale_posterior_ = [[2, 2, 2]] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=[2., 2., 2., 2.],
scale_prior=[[2], [2], [2], [2]]
)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:486:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0], covariance_type='diag',
dof_prior=[2.0,...andom_state=RandomState(MT19937) at 0xFFFF94792640,
scale_prior=[[2], [2], [2], [2]], tol=1e-09)
X = array([[ 0.37725899],
[ 3.11738285],
[-0.09163979],
[ 1.69939899],
[ 1.17211122],
[...975532],
[-1.29219785],
[-2.21400016],
[-0.12401679],
[ 3.5650227 ],
[-0.33847644]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestDiagonal.test_initialization[log] _____________________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffff94af79d0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [[2], [2], [2]]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type=self.covariance_type,
implementation=implementation)
model.dof_prior_ = [1, 1, 1, 1]
model.dof_posterior_ = [1, 1, 1, 1]
model.scale_prior_ = [[2], [2], [2], [2]]
model.scale_posterior_ = [[2, 2, 2]] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=[2., 2., 2., 2.],
scale_prior=[[2], [2], [2], [2]]
)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:486:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0], covariance_type='diag',
dof_prior=[2.0,...andom_state=RandomState(MT19937) at 0xFFFF94792940,
scale_prior=[[2], [2], [2], [2]], tol=1e-09)
X = array([[-1.50974603],
[ 0.66501942],
[ 1.03376567],
[-0.33821964],
[-0.03369866],
[...945696],
[ 1.03948035],
[ 3.29548267],
[-1.67415189],
[-0.95330419],
[ 2.79920426]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
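Every failure above names a numpy.ndarray as the object whose reference count pybind11 refused to adjust. A hypothetical binding shaped like the _hmmc.forward_* calls in the tracebacks (stub names only, not hmmlearn's real extension source) shows where such refcount traffic naturally arises: arrays handed across the Python/C++ boundary have their handles copied, which is exactly what the assertion guards.

    #include <pybind11/pybind11.h>
    #include <pybind11/numpy.h>
    namespace py = pybind11;

    // Hypothetical stub for illustration only.
    py::array_t<double> forward_scaling_stub(py::array_t<double> startprob,
                                             py::array_t<double> transmat,
                                             py::array_t<double> frameprob) {
        auto buf = frameprob.request();
        py::array_t<double> fwdlattice(buf.shape);   // allocates a new numpy.ndarray
        // ... the forward recursion would fill fwdlattice here ...
        return fwdlattice;                           // handing it back manipulates its refcount,
                                                     // which must happen with the GIL held
    }

    PYBIND11_MODULE(_hmmc_stub, m) {
        m.def("forward_scaling", &forward_scaling_stub);
    }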
=============================== warnings summary ===============================
.pybuild/cpython3_3.13_hmmlearn/build/hmmlearn/tests/test_variational_categorical.py: 9 warnings
.pybuild/cpython3_3.13_hmmlearn/build/hmmlearn/tests/test_variational_gaussian.py: 15 warnings
/<<PKGBUILDDIR>>/.pybuild/cpython3_3.13_hmmlearn/build/hmmlearn/base.py:1192: RuntimeWarning: underflow encountered in exp
self.startprob_subnorm_ = np.exp(startprob_log_subnorm)
.pybuild/cpython3_3.13_hmmlearn/build/hmmlearn/tests/test_variational_categorical.py: 7 warnings
.pybuild/cpython3_3.13_hmmlearn/build/hmmlearn/tests/test_variational_gaussian.py: 13 warnings
/<<PKGBUILDDIR>>/.pybuild/cpython3_3.13_hmmlearn/build/hmmlearn/base.py:1197: RuntimeWarning: underflow encountered in exp
self.transmat_subnorm_ = np.exp(transmat_log_subnorm)
.pybuild/cpython3_3.13_hmmlearn/build/hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_beal[scaling]
/<<PKGBUILDDIR>>/.pybuild/cpython3_3.13_hmmlearn/build/hmmlearn/base.py:1130: RuntimeWarning: underflow encountered in exp
return np.exp(self._compute_subnorm_log_likelihood(X))
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
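The underflow warnings above come from exponentiating very negative log-probabilities in base.py: the result falls below the smallest representable double and flushes toward zero, which numpy reports as a RuntimeWarning. A tiny stand-alone C++ analogue of the same effect (not part of this build):

    #include <cfenv>
    #include <cmath>
    #include <cstdio>

    int main() {
        std::feclearexcept(FE_ALL_EXCEPT);
        // exp(-800) is roughly 4e-348, below the smallest subnormal double (~4.9e-324),
        // so it underflows to 0.0 and sets FE_UNDERFLOW; numpy reports the analogous
        // condition as "underflow encountered in exp".
        double p = std::exp(-800.0);
        std::printf("exp(-800) = %g, FE_UNDERFLOW set = %d\n",
                    p, std::fetestexcept(FE_UNDERFLOW) != 0);
        return 0;
    }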
=========================== short test summary info ============================
FAILED hmmlearn/tests/test_base.py::TestBaseAgainstWikipedia::test_do_forward_scaling_pass
FAILED hmmlearn/tests/test_base.py::TestBaseAgainstWikipedia::test_do_forward_pass
FAILED hmmlearn/tests/test_base.py::TestBaseAgainstWikipedia::test_do_backward_scaling_pass
FAILED hmmlearn/tests/test_base.py::TestBaseAgainstWikipedia::test_do_viterbi_pass
FAILED hmmlearn/tests/test_base.py::TestBaseAgainstWikipedia::test_score_samples
FAILED hmmlearn/tests/test_base.py::TestBaseConsistentWithGMM::test_score_samples
FAILED hmmlearn/tests/test_base.py::TestBaseConsistentWithGMM::test_decode - ...
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalAgainstWikipedia::test_decode_viterbi[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalAgainstWikipedia::test_decode_viterbi[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalAgainstWikipedia::test_decode_map[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalAgainstWikipedia::test_decode_map[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalAgainstWikipedia::test_predict[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalAgainstWikipedia::test_predict[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_n_features[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_n_features[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_score_samples[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_score_samples[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_fit[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_fit[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_fit_emissionprob[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_fit_emissionprob[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_fit_with_init[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_fit_with_init[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_ignored_init_warns[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_ignored_init_warns[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_sequences_of_different_length[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_sequences_of_different_length[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_with_length_one_signal[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_with_length_one_signal[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_zero_variance[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_zero_variance[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_with_priors[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_with_priors[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_startprob_and_transmat[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_startprob_and_transmat[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_underflow_from_scaling[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_ignored_init_warns[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_ignored_init_warns[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_sequences_of_different_length[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_sequences_of_different_length[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_with_length_one_signal[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_with_length_one_signal[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_zero_variance[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_zero_variance[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_with_priors[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_with_priors[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_left_right[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_left_right[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_ignored_init_warns[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_ignored_init_warns[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_sequences_of_different_length[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_sequences_of_different_length[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_with_length_one_signal[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_with_length_one_signal[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_zero_variance[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_zero_variance[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_with_priors[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_with_priors[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_ignored_init_warns[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_ignored_init_warns[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_sequences_of_different_length[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_sequences_of_different_length[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_with_length_one_signal[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_with_length_one_signal[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_zero_variance[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_zero_variance[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_with_priors[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_with_priors[log]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-diag]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-spherical]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-tied]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-full]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-diag]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-spherical]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-tied]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-full]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_fit[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_fit_sparse_data[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_fit_sparse_data[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_fit[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_fit_sparse_data[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_fit_sparse_data[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_fit[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_fit_sparse_data[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_fit_sparse_data[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_fit[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_fit_sparse_data[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_fit_sparse_data[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMM_KmeansInit::test_kmeans[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMM_KmeansInit::test_kmeans[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMM_MultiSequence::test_chunked[diag]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMM_MultiSequence::test_chunked[spherical]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMM_MultiSequence::test_chunked[tied]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMM_MultiSequence::test_chunked[full]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_score_samples[scaling]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_score_samples[log]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_fit[scaling]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_fit[log]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_fit_emissionprob[scaling]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_fit_emissionprob[log]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_fit_with_init[scaling]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_fit_with_init[log]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_compare_with_categorical_hmm[scaling]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_compare_with_categorical_hmm[log]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_score_samples[scaling]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_score_samples[log]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_fit[scaling]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_fit[log] - Ru...
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_fit_lambdas[scaling]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_fit_lambdas[log]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_fit_with_init[scaling]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_fit_with_init[log]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_criterion[scaling]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_criterion[log]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_init_priors[scaling]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_init_priors[log]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_n_features[scaling]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_n_features[log]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_init_incorrect_priors[scaling]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_init_incorrect_priors[log]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_beal[scaling]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_beal[log]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_and_compare_with_em[scaling]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_and_compare_with_em[log]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_length_1_sequences[scaling]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_length_1_sequences[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_random_fit[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_random_fit[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_fit_mcgrory_titterington1d[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_fit_mcgrory_titterington1d[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_common_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_common_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_random_fit[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_random_fit[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_fit_mcgrory_titterington1d[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_fit_mcgrory_titterington1d[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_common_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_common_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_random_fit[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_random_fit[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_fit_mcgrory_titterington1d[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_fit_mcgrory_titterington1d[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_common_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_common_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_random_fit[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_random_fit[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_fit_mcgrory_titterington1d[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_fit_mcgrory_titterington1d[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_common_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_common_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_initialization[log]
=========== 202 failed, 92 passed, 26 xfailed, 45 warnings in 25.64s ===========
E: pybuild pybuild:389: test: plugin pyproject failed with: exit code=1: cd /<<PKGBUILDDIR>>/.pybuild/cpython3_3.13_hmmlearn/build; python3.13 -m pytest --pyargs hmmlearn
I: pybuild base:311: cd /<<PKGBUILDDIR>>/.pybuild/cpython3_3.12_hmmlearn/build; python3.12 -m pytest --pyargs hmmlearn
set RNG seed to 981710331
============================= test session starts ==============================
platform linux -- Python 3.12.7, pytest-8.3.3, pluggy-1.5.0
rootdir: /<<PKGBUILDDIR>>
configfile: setup.cfg
plugins: typeguard-4.3.0
collected 320 items
hmmlearn/tests/test_base.py .....FFF.FF.FF.. [ 5%]
hmmlearn/tests/test_categorical_hmm.py FFFFFFFF..FF..FFFFFF.. [ 11%]
hmmlearn/tests/test_gaussian_hmm.py ..FF..FFFFFF..FFFFFFFF..FF.F..FF..FF [ 23%]
FFFF..FFFFFFFF..FF..FF..FFFFFF..FFFFFFFF..FF..FFFFFF..FFFFFFFF [ 42%]
hmmlearn/tests/test_gmm_hmm.py xxxxxxxxxxxxxxxxxx [ 48%]
hmmlearn/tests/test_gmm_hmm_multisequence.py FFFFFFFF [ 50%]
hmmlearn/tests/test_gmm_hmm_new.py ........FFFFFFxxFF........FFFFFFxxFF. [ 62%]
.......FFFFFFxxFF........FFFFFFxxFFFFFFFF [ 75%]
hmmlearn/tests/test_kl_divergence.py ..... [ 76%]
hmmlearn/tests/test_multinomial_hmm.py ..FF..FFFFFF..FF [ 81%]
hmmlearn/tests/test_poisson_hmm.py ..FFFFFFFFFF [ 85%]
hmmlearn/tests/test_utils.py ... [ 86%]
hmmlearn/tests/test_variational_categorical.py FFFFFFFFFFFF [ 90%]
hmmlearn/tests/test_variational_gaussian.py FFFFFFFFFFFFFFFFFFFFFFFFFFFF [ 98%]
FFFF [100%]
=================================== FAILURES ===================================
____________ TestBaseAgainstWikipedia.test_do_forward_scaling_pass _____________
self = <hmmlearn.tests.test_base.TestBaseAgainstWikipedia object at 0xffff7b3c8890>
def test_do_forward_scaling_pass(self):
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.hmm.startprob_, self.hmm.transmat_, self.frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/tests/test_base.py:79: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
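The same captured stderr recurs for every compiled _hmmc call in this session. The sketch below is not taken from the hmmlearn or pybind11 sources; it is a minimal, hypothetical pybind11 module (the module name `example`, the function `doubled`, and all of the buffer handling are invented for illustration) showing the two remedies the message itself offers: keep the GIL held whenever Python objects such as numpy arrays are created or reference-counted, or, only if the extension is known to be correct, define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF consistently in every translation unit to disable the assertion.

```cpp
// Hedged sketch only; nothing here is hmmlearn code.
// Remedy (2) from the message, if chosen at all, would have to appear before
// the pybind11 includes in *every* translation unit of the extension:
// #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF

#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>

namespace py = pybind11;

// Remedy (1): pybind11 holds the GIL on entry to a bound function, so Python
// objects may be created here.  Only pure C++ work should run inside a
// gil_scoped_release block; anything that creates or reference-counts a
// Python object (such as a numpy array) must happen while the GIL is held,
// otherwise pybind11::handle::inc_ref() trips the PyGILState_Check()
// assertion seen throughout this log.  On C++-created threads, take the GIL
// explicitly with py::gil_scoped_acquire before touching Python objects.
py::array_t<double> doubled(
        py::array_t<double, py::array::c_style | py::array::forcecast> a) {
    py::buffer_info in = a.request();      // GIL held: safe
    py::array_t<double> out(in.size);      // creating a Python object needs the GIL
    py::buffer_info out_buf = out.request();

    const double *src = static_cast<const double *>(in.ptr);
    double *dst = static_cast<double *>(out_buf.ptr);
    {
        py::gil_scoped_release release;    // plain C++ loop: no Python API here
        for (py::ssize_t i = 0; i < in.size; ++i)
            dst[i] = 2.0 * src[i];
    }                                      // the GIL is re-held from here on
    return out;                            // reference counting is legal again
}

PYBIND11_MODULE(example, m) {
    m.def("doubled", &doubled,
          "Return a copy of the input array with every value doubled.");
}
```

The gil_scoped_release block is only there to mark the boundary: everything inside it must stay away from the Python C API, everything outside it may touch py:: objects freely.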
________________ TestBaseAgainstWikipedia.test_do_forward_pass _________________
self = <hmmlearn.tests.test_base.TestBaseAgainstWikipedia object at 0xffff7a9c0050>
def test_do_forward_pass(self):
> log_prob, fwdlattice = _hmmc.forward_log(
self.hmm.startprob_, self.hmm.transmat_, self.log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/tests/test_base.py:91: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________ TestBaseAgainstWikipedia.test_do_backward_scaling_pass ____________
self = <hmmlearn.tests.test_base.TestBaseAgainstWikipedia object at 0xffff7a9c0170>
def test_do_backward_scaling_pass(self):
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.hmm.startprob_, self.hmm.transmat_, self.frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/tests/test_base.py:104: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________ TestBaseAgainstWikipedia.test_do_viterbi_pass _________________
self = <hmmlearn.tests.test_base.TestBaseAgainstWikipedia object at 0xffff7a9c0410>
def test_do_viterbi_pass(self):
> log_prob, state_sequence = _hmmc.viterbi(
self.hmm.startprob_, self.hmm.transmat_, self.log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/tests/test_base.py:129: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestBaseAgainstWikipedia.test_score_samples __________________
self = <hmmlearn.tests.test_base.TestBaseAgainstWikipedia object at 0xffff7a9c05f0>
def test_score_samples(self):
# ``StubHMM`` ignores the values in ``X``, so we just pass in an
# array of the appropriate shape.
> log_prob, posteriors = self.hmm.score_samples(self.log_frameprob)
hmmlearn/tests/test_base.py:139:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = StubHMM(n_components=2)
X = array([[-0.10536052, -1.60943791],
[-0.10536052, -1.60943791],
[-2.30258509, -0.22314355],
[-0.10536052, -1.60943791],
[-0.10536052, -1.60943791]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestBaseConsistentWithGMM.test_score_samples _________________
self = <hmmlearn.tests.test_base.TestBaseConsistentWithGMM object at 0xffff7a9c0ad0>
def test_score_samples(self):
> log_prob, hmmposteriors = self.hmm.score_samples(self.log_frameprob)
hmmlearn/tests/test_base.py:177:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = StubHMM(n_components=8)
X = array([[-2.7325315 , -1.63179913, -0.02149307, -1.07198435, -1.72078328,
-0.36669203, -3.20520959, -4.24135406... [-0.4663203 , -3.05398868, -0.6031281 , -0.07574733, -0.01520237,
-0.11555031, -0.68771153, -1.2969431 ]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestBaseConsistentWithGMM.test_decode _____________________
self = <hmmlearn.tests.test_base.TestBaseConsistentWithGMM object at 0xffff7a9c0c80>
def test_decode(self):
> _log_prob, state_sequence = self.hmm.decode(self.log_frameprob)
hmmlearn/tests/test_base.py:188:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = StubHMM(n_components=8)
X = array([[-0.90819829, -0.05959104, -0.80104442, -1.61850617, -0.19224396,
-0.40660186, -0.17785689, -0.43418718... [-0.07771956, -0.66251783, -0.81845254, -0.0392787 , -2.2668067 ,
-0.10733117, -0.88807175, -0.04544324]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestCategoricalAgainstWikipedia.test_decode_viterbi[scaling] _________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalAgainstWikipedia object at 0xffff79dd61e0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_decode_viterbi(self, implementation):
# From http://en.wikipedia.org/wiki/Viterbi_algorithm:
# "This reveals that the observations ['walk', 'shop', 'clean']
# were most likely generated by states ['Sunny', 'Rainy', 'Rainy'],
# with probability 0.01344."
h = self.new_hmm(implementation)
X = [[0], [1], [2]]
> log_prob, state_sequence = h.decode(X, algorithm="viterbi")
hmmlearn/tests/test_categorical_hmm.py:37:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', n_components=2, n_features=3)
X = array([[0],
[1],
[2]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___________ TestCategoricalAgainstWikipedia.test_decode_viterbi[log] ___________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalAgainstWikipedia object at 0xffff7aa0da30>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_decode_viterbi(self, implementation):
# From http://en.wikipedia.org/wiki/Viterbi_algorithm:
# "This reveals that the observations ['walk', 'shop', 'clean']
# were most likely generated by states ['Sunny', 'Rainy', 'Rainy'],
# with probability 0.01344."
h = self.new_hmm(implementation)
X = [[0], [1], [2]]
> log_prob, state_sequence = h.decode(X, algorithm="viterbi")
hmmlearn/tests/test_categorical_hmm.py:37:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(n_components=2, n_features=3)
X = array([[0],
[1],
[2]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___________ TestCategoricalAgainstWikipedia.test_decode_map[scaling] ___________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalAgainstWikipedia object at 0xffff7940e8d0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_decode_map(self, implementation):
X = [[0], [1], [2]]
h = self.new_hmm(implementation)
> _log_prob, state_sequence = h.decode(X, algorithm="map")
hmmlearn/tests/test_categorical_hmm.py:45:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
hmmlearn/base.py:289: in _decode_map
_, posteriors = self.score_samples(X)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', n_components=2, n_features=3)
X = array([[0],
[1],
[2]]), lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_____________ TestCategoricalAgainstWikipedia.test_decode_map[log] _____________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalAgainstWikipedia object at 0xffff7940f380>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_decode_map(self, implementation):
X = [[0], [1], [2]]
h = self.new_hmm(implementation)
> _log_prob, state_sequence = h.decode(X, algorithm="map")
hmmlearn/tests/test_categorical_hmm.py:45:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
hmmlearn/base.py:289: in _decode_map
_, posteriors = self.score_samples(X)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(n_components=2, n_features=3)
X = array([[0],
[1],
[2]]), lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________ TestCategoricalAgainstWikipedia.test_predict[scaling] _____________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalAgainstWikipedia object at 0xffff7940f5c0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_predict(self, implementation):
X = [[0], [1], [2]]
h = self.new_hmm(implementation)
> state_sequence = h.predict(X)
hmmlearn/tests/test_categorical_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:363: in predict
_, state_sequence = self.decode(X, lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', n_components=2, n_features=3)
X = array([[0],
[1],
[2]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestCategoricalAgainstWikipedia.test_predict[log] _______________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalAgainstWikipedia object at 0xffff7940f740>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_predict(self, implementation):
X = [[0], [1], [2]]
h = self.new_hmm(implementation)
> state_sequence = h.predict(X)
hmmlearn/tests/test_categorical_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:363: in predict
_, state_sequence = self.decode(X, lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(n_components=2, n_features=3)
X = array([[0],
[1],
[2]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestCategoricalHMM.test_n_features[scaling] __________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff7940f8c0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_n_features(self, implementation):
sequences, _ = self.new_hmm(implementation).sample(500)
# set n_features
model = hmm.CategoricalHMM(
n_components=2, implementation=implementation)
> assert_log_likelihood_increasing(model, sequences, [500], 10)
hmmlearn/tests/test_categorical_hmm.py:80:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', init_params='', n_components=2,
n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF7F6E3640)
X = array([[1],
[1],
[1],
[2],
[1],
[0],
[0],
[0],
[1],
[0]... [2],
[2],
[2],
[1],
[2],
[0],
[1],
[2],
[0],
[1]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___________________ TestCategoricalHMM.test_n_features[log] ____________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff7940fa40>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_n_features(self, implementation):
sequences, _ = self.new_hmm(implementation).sample(500)
# set n_features
model = hmm.CategoricalHMM(
n_components=2, implementation=implementation)
> assert_log_likelihood_increasing(model, sequences, [500], 10)
hmmlearn/tests/test_categorical_hmm.py:80:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(init_params='', n_components=2, n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF7F6E3640)
X = array([[0],
[0],
[1],
[0],
[2],
[0],
[0],
[2],
[2],
[2]... [1],
[1],
[0],
[1],
[0],
[0],
[2],
[1],
[1],
[0]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________ TestCategoricalHMM.test_score_samples[scaling] ________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff7940ffb0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples(self, implementation):
idx = np.repeat(np.arange(self.n_components), 10)
n_samples = len(idx)
X = np.random.randint(self.n_features, size=(n_samples, 1))
h = self.new_hmm(implementation)
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_categorical_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', n_components=2, n_features=3)
X = array([[0],
[1],
[2],
[2],
[2],
[1],
[2],
[0],
[0],
[1],
[1],
[2],
[2],
[1],
[2],
[0],
[2],
[1],
[1],
[2]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________________ TestCategoricalHMM.test_score_samples[log] __________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff794242f0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples(self, implementation):
idx = np.repeat(np.arange(self.n_components), 10)
n_samples = len(idx)
X = np.random.randint(self.n_features, size=(n_samples, 1))
h = self.new_hmm(implementation)
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_categorical_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(n_components=2, n_features=3)
X = array([[1],
[1],
[0],
[0],
[1],
[0],
[2],
[1],
[0],
[0],
[2],
[0],
[1],
[0],
[1],
[0],
[1],
[0],
[1],
[0]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_____________________ TestCategoricalHMM.test_fit[scaling] _____________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff79424890>
implementation = 'scaling', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='ste', n_iter=5):
h = self.new_hmm(implementation)
h.params = params
lengths = np.array([10] * 10)
X, _state_sequence = h.sample(lengths.sum())
# Mess up the parameters and see if we can re-learn them.
h.startprob_ = normalized(np.random.random(self.n_components))
h.transmat_ = normalized(
np.random.random((self.n_components, self.n_components)),
axis=1)
h.emissionprob_ = normalized(
np.random.random((self.n_components, self.n_features)),
axis=1)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_categorical_hmm.py:140:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', init_params='', n_components=2,
n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF7F6E3640)
X = array([[0],
[0],
[0],
[1],
[0],
[2],
[2],
[0],
[2],
[2]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______________________ TestCategoricalHMM.test_fit[log] _______________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff79424a10>
implementation = 'log', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='ste', n_iter=5):
h = self.new_hmm(implementation)
h.params = params
lengths = np.array([10] * 10)
X, _state_sequence = h.sample(lengths.sum())
# Mess up the parameters and see if we can re-learn them.
h.startprob_ = normalized(np.random.random(self.n_components))
h.transmat_ = normalized(
np.random.random((self.n_components, self.n_components)),
axis=1)
h.emissionprob_ = normalized(
np.random.random((self.n_components, self.n_features)),
axis=1)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_categorical_hmm.py:140:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(init_params='', n_components=2, n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF7F6E3640)
X = array([[2],
[2],
[2],
[1],
[1],
[0],
[2],
[0],
[0],
[1]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestCategoricalHMM.test_fit_emissionprob[scaling] _______________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff79424c20>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_emissionprob(self, implementation):
> self.test_fit(implementation, 'e')
hmmlearn/tests/test_categorical_hmm.py:144:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_categorical_hmm.py:140: in test_fit
assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(implementation='scaling', init_params='', n_components=2,
n_features=3, n_iter=1, params='e',
random_state=RandomState(MT19937) at 0xFFFF7F6E3640)
X = array([[1],
[1],
[1],
[0],
[2],
[2],
[1],
[2],
[2],
[1]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________ TestCategoricalHMM.test_fit_emissionprob[log] _________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff79424da0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_emissionprob(self, implementation):
> self.test_fit(implementation, 'e')
hmmlearn/tests/test_categorical_hmm.py:144:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_categorical_hmm.py:140: in test_fit
assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(init_params='', n_components=2, n_features=3, n_iter=1,
params='e', random_state=RandomState(MT19937) at 0xFFFF7F6E3640)
X = array([[1],
[0],
[2],
[0],
[1],
[2],
[0],
[0],
[0],
[1]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________ TestCategoricalHMM.test_fit_with_init[scaling] ________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff79424fb0>
implementation = 'scaling', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_init(self, implementation, params='ste', n_iter=5):
lengths = [10] * 10
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(sum(lengths))
# use init_function to initialize parameters
h = hmm.CategoricalHMM(self.n_components, params=params,
init_params=params)
h._init(X, lengths)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_categorical_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(init_params='', n_components=2, n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF7F6E3640)
X = array([[1],
[0],
[0],
[1],
[1],
[1],
[1],
[1],
[1],
[0]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________________ TestCategoricalHMM.test_fit_with_init[log] __________________
self = <hmmlearn.tests.test_categorical_hmm.TestCategoricalHMM object at 0xffff79425130>
implementation = 'log', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_init(self, implementation, params='ste', n_iter=5):
lengths = [10] * 10
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(sum(lengths))
# use init_function to initialize parameters
h = hmm.CategoricalHMM(self.n_components, params=params,
init_params=params)
h._init(X, lengths)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_categorical_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = CategoricalHMM(init_params='', n_components=2, n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF7F6E3640)
X = array([[1],
[0],
[0],
[0],
[0],
[2],
[2],
[2],
[2],
[2]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__ TestGaussianHMMWithSphericalCovars.test_score_samples_and_decode[scaling] ___
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff79427200>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='st', n_components=3)
X = array([[-179.56000798, 79.57176561, 259.68798732],
[-180.56888339, 78.41505899, 261.05535316],
[-1...6363279 ],
[-140.61081384, -301.3193914 , -140.56172842],
[-139.79461543, -300.95336068, -139.67848205]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____ TestGaussianHMMWithSphericalCovars.test_score_samples_and_decode[log] _____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff79427380>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', init_params='st', n_components=3)
X = array([[-179.56000798, 79.57176561, 259.68798732],
[-180.56888339, 78.41505899, 261.05535316],
[-1...6363279 ],
[-140.61081384, -301.3193914 , -140.56172842],
[-139.79461543, -300.95336068, -139.67848205]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
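The debugging advice linked above amounts to making sure the GIL is held whenever a Python object (here a numpy.ndarray) is created or has its reference count changed. A minimal, hypothetical sketch of that pattern (not the actual _hmmc code):
    #include <pybind11/pybind11.h>
    #include <pybind11/numpy.h>
    #include <algorithm>
    #include <cstddef>
    #include <vector>
    namespace py = pybind11;

    // Hypothetical helper: run the numeric part without the GIL, but make
    // sure it is re-acquired before any py::array is constructed, since
    // constructing one calls inc_ref() on Python objects.
    py::array_t<double> forward_sketch(std::size_t n) {
        std::vector<double> work(n, 0.0);
        {
            py::gil_scoped_release release;   // heavy C++-only work here
            for (std::size_t i = 0; i < n; ++i)
                work[i] = static_cast<double>(i);
        }   // GIL re-acquired when `release` goes out of scope
        py::array_t<double> out(static_cast<py::ssize_t>(n));
        std::copy(work.begin(), work.end(), out.mutable_data());
        return out;
    }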
_____________ TestGaussianHMMWithSphericalCovars.test_fit[scaling] _____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff79427920>
implementation = 'scaling', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...95007275],
[-139.97005487, -299.93792764, -140.04085163],
[-239.95188158, 320.03972951, -119.97272471]])
_state_sequence = array([0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 0, 0, 2, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 1,...2,
2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 2, 1])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...02423171],
[-240.11318548, 319.89135278, -120.23468395],
[-240.09991625, 319.74125997, -119.91965919]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
_______________ TestGaussianHMMWithSphericalCovars.test_fit[log] _______________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff79427b00>
implementation = 'log', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(covariance_type='spherical', n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...95007275],
[-139.97005487, -299.93792764, -140.04085163],
[-239.95188158, 320.03972951, -119.97272471]])
_state_sequence = array([0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 0, 0, 2, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 1,...2,
2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 2, 1])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=3)
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...02423171],
[-240.11318548, 319.89135278, -120.23468395],
[-240.09991625, 319.74125997, -119.91965919]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
__________ TestGaussianHMMWithSphericalCovars.test_criterion[scaling] __________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff79427d10>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFF78FA7540)
X = array([[ -90.15718286, 40.04508216, 130.03944716],
[-119.82025674, 159.91649324, -59.90349328],
[-1...84045482],
[-120.12894077, 159.84070667, -60.20323671],
[ -89.97836609, 39.94933366, 129.82682576]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________ TestGaussianHMMWithSphericalCovars.test_criterion[log] ____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff79427e90>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFF78F88240)
X = array([[ -90.15718286, 40.04508216, 130.03944716],
[-119.82025674, 159.91649324, -59.90349328],
[-1...84045482],
[-120.12894077, 159.84070667, -60.20323671],
[ -89.97836609, 39.94933366, 129.82682576]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___ TestGaussianHMMWithSphericalCovars.test_fit_ignored_init_warns[scaling] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff79441460>
implementation = 'scaling'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffff78ff2a50>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
X = array([[ 4.39992016e-01, -4.28234395e-01, -3.12012681e-01],
[-5.68883385e-01, -1.58494101e+00, 1.05535316e+00]... [-2.20862064e-01, 4.83062914e-01, -1.95718567e+00],
[ 1.00961906e+00, 7.02226595e-01, -9.47509422e-01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
_____ TestGaussianHMMWithSphericalCovars.test_fit_ignored_init_warns[log] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff79443200>
implementation = 'log'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffff78f30d70>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=3)
X = array([[ 4.39992016e-01, -4.28234395e-01, -3.12012681e-01],
[-5.68883385e-01, -1.58494101e+00, 1.05535316e+00]... [-2.20862064e-01, 4.83062914e-01, -1.95718567e+00],
[ 1.00961906e+00, 7.02226595e-01, -9.47509422e-01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
_ TestGaussianHMMWithSphericalCovars.test_fit_sequences_of_different_length[scaling] _
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff79427c50>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
X = array([[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.06556327, 0.05644419],
[0.76545582, 0.01178803, 0.61194334]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_ TestGaussianHMMWithSphericalCovars.test_fit_sequences_of_different_length[log] _
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff79427110>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=3)
X = array([[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.06556327, 0.05644419],
[0.76545582, 0.01178803, 0.61194334]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_ TestGaussianHMMWithSphericalCovars.test_fit_with_length_one_signal[scaling] __
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff793e8b30>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
X = array([[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.06556327, 0.05644419],
[0.76545582, 0.011788...06, 0.59758229, 0.87239246],
[0.98302087, 0.46740328, 0.87574449],
[0.2960687 , 0.13129105, 0.84281793]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___ TestGaussianHMMWithSphericalCovars.test_fit_with_length_one_signal[log] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff794404a0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=3)
X = array([[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.06556327, 0.05644419],
[0.76545582, 0.011788...06, 0.59758229, 0.87239246],
[0.98302087, 0.46740328, 0.87574449],
[0.2960687 , 0.13129105, 0.84281793]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______ TestGaussianHMMWithSphericalCovars.test_fit_zero_variance[scaling] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff79440680>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________ TestGaussianHMMWithSphericalCovars.test_fit_zero_variance[log] ________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff79440830>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______ TestGaussianHMMWithSphericalCovars.test_fit_with_priors[scaling] _______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff79440a10>
implementation = 'scaling', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='', n_components=3, n_iter=1)
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...73790553],
[-180.18615346, 79.87077255, 259.73353861],
[-240.06028298, 320.09425446, -119.74998577]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestGaussianHMMWithSphericalCovars.test_fit_with_priors[log] _________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff79440b90>
implementation = 'log', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', init_params='', n_components=3,
n_iter=1)
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...73790553],
[-180.18615346, 79.87077255, 259.73353861],
[-240.06028298, 320.09425446, -119.74998577]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_ TestGaussianHMMWithSphericalCovars.test_fit_startprob_and_transmat[scaling] __
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff794258e0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_startprob_and_transmat(self, implementation):
> self.test_fit(implementation, 'st')
hmmlearn/tests/test_gaussian_hmm.py:274:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_gaussian_hmm.py:89: in test_fit
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', implementation='scaling',
n_components=3)
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...02423171],
[-240.11318548, 319.89135278, -120.23468395],
[-240.09991625, 319.74125997, -119.91965919]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
___ TestGaussianHMMWithSphericalCovars.test_fit_startprob_and_transmat[log] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff794255e0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_startprob_and_transmat(self, implementation):
> self.test_fit(implementation, 'st')
hmmlearn/tests/test_gaussian_hmm.py:274:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_gaussian_hmm.py:89: in test_fit
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', n_components=3)
X = array([[-180.22405794, 80.14919183, 259.7276458 ],
[-179.90124348, 79.8556723 , 259.97708883],
[-2...02423171],
[-240.11318548, 319.89135278, -120.23468395],
[-240.09991625, 319.74125997, -119.91965919]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
_____ TestGaussianHMMWithSphericalCovars.test_underflow_from_scaling[log] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithSphericalCovars object at 0xffff79426c90>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_underflow_from_scaling(self, implementation):
# Setup an ill-conditioned dataset
data1 = self.prng.normal(0, 1, 100).tolist()
data2 = self.prng.normal(5, 1, 100).tolist()
data3 = self.prng.normal(0, 1, 100).tolist()
data4 = self.prng.normal(5, 1, 100).tolist()
data = np.concatenate([data1, data2, data3, data4])
# Insert an outlier
data[40] = 10000
data2d = data[:, None]
lengths = [len(data2d)]
h = hmm.GaussianHMM(2, n_iter=100, verbose=True,
covariance_type=self.covariance_type,
implementation=implementation, init_params="")
h.startprob_ = [0.0, 1]
h.transmat_ = [[0.4, 0.6], [0.6, 0.4]]
h.means_ = [[0], [5]]
h.covars_ = [[1], [1]]
if implementation == "scaling":
with pytest.raises(ValueError):
h.fit(data2d, lengths)
else:
> h.fit(data2d, lengths)
hmmlearn/tests/test_gaussian_hmm.py:300:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='spherical', init_params='', n_components=2,
n_iter=100, verbose=True)
X = array([[ 4.39992016e-01],
[-4.28234395e-01],
[-3.12012681e-01],
[-5.68883385e-01],
[-1.584...83917623e+00],
[ 5.48982119e+00],
[ 7.23344018e+00],
[ 4.20497381e+00],
[ 4.96426274e+00]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
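For context on this test's intent (independent of the pybind11 failure): with implementation="scaling" the extreme outlier drives the per-frame probabilities to zero and the test expects a ValueError, while implementation="log" works in log space and is expected to stay finite. A hypothetical illustration of that numerical difference, assuming scipy is available:
    # Hypothetical illustration (not from this log): probability-space sums
    # underflow where log-space sums remain finite.
    import numpy as np
    from scipy.special import logsumexp

    log_p = np.array([-1000.0, -1001.0])    # frame log-probabilities near the outlier
    print(np.sum(np.exp(log_p)))            # 0.0        -> underflow in probability space
    print(logsumexp(log_p))                 # ~ -999.69  -> still usable in log space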
___ TestGaussianHMMWithDiagonalCovars.test_score_samples_and_decode[scaling] ___
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff79441880>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', init_params='st', n_components=3)
X = array([[-181.58494101, 81.05535316, 258.07342089],
[-179.30141612, 79.25379857, 259.84337334],
[-1...79461543],
[-140.95336068, -299.67848205, -141.52093867],
[-142.16145292, -299.65468671, -139.12103062]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_____ TestGaussianHMMWithDiagonalCovars.test_score_samples_and_decode[log] _____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff79441a00>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(init_params='st', n_components=3)
X = array([[-181.58494101, 81.05535316, 258.07342089],
[-179.30141612, 79.25379857, 259.84337334],
[-1...79461543],
[-140.95336068, -299.67848205, -141.52093867],
[-142.16145292, -299.65468671, -139.12103062]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
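The stderr advice above mentions disabling the assertion with PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF, provided the define is applied consistently to every translation unit of the extension. hmmlearn's actual build configuration is not shown in this log; the fragment below is only a generic, hypothetical sketch of applying such a macro uniformly from a setup.py via pybind11's standard setup helpers. Defining the macro only disables the assertion itself; it does not address whatever caused PyGILState_Check() to fail in the first place.
    # Hypothetical setup.py fragment (not hmmlearn's actual build): applying the
    # macro at the build-system level so every translation unit of the extension
    # sees the same define.
    from pybind11.setup_helpers import Pybind11Extension
    from setuptools import setup

    ext = Pybind11Extension(
        "_hmmc",
        ["src/_hmmc.cpp"],                 # hypothetical source layout
        define_macros=[("PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF", None)],
    )
    setup(name="example", ext_modules=[ext])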
_____________ TestGaussianHMMWithDiagonalCovars.test_fit[scaling] ______________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff79441fd0>
implementation = 'scaling', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(implementation='scaling', n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-180.10548806, 79.65731382, 260.1106488 ],
[-240.1207402 , 319.97139868, -120.01791514],
[-2...02728733],
[-239.95365322, 320.03452379, -120.02851028],
[-179.97832262, 80.04811842, 260.03537787]])
_state_sequence = array([0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 2, 2, 1, 0, 0, 2, 1, 1,
1, 0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 2, 2,...2,
2, 1, 1, 1, 0, 0, 0, 1, 1, 2, 2, 1, 1, 1, 2, 1, 0, 0, 0, 1, 1, 0,
1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', n_components=3)
X = array([[-180.10548806, 79.65731382, 260.1106488 ],
[-240.1207402 , 319.97139868, -120.01791514],
[-2...98639428],
[-180.18651806, 79.88681452, 259.90325311],
[-179.93614812, 79.90008375, 259.76960024]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
_______________ TestGaussianHMMWithDiagonalCovars.test_fit[log] ________________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff79442150>
implementation = 'log', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(n_components=3), lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-180.10548806, 79.65731382, 260.1106488 ],
[-240.1207402 , 319.97139868, -120.01791514],
[-2...02728733],
[-239.95365322, 320.03452379, -120.02851028],
[-179.97832262, 80.04811842, 260.03537787]])
_state_sequence = array([0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 2, 2, 1, 0, 0, 2, 1, 1,
1, 0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 2, 2,...2,
2, 1, 1, 1, 0, 0, 0, 1, 1, 2, 2, 1, 1, 1, 2, 1, 0, 0, 0, 1, 1, 0,
1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(n_components=3)
X = array([[-180.10548806, 79.65731382, 260.1106488 ],
[-240.1207402 , 319.97139868, -120.01791514],
[-2...98639428],
[-180.18651806, 79.88681452, 259.90325311],
[-179.93614812, 79.90008375, 259.76960024]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
__________ TestGaussianHMMWithDiagonalCovars.test_criterion[scaling] ___________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff79442330>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFF790B9640)
X = array([[ -89.96055284, 39.80222668, 130.05051096],
[-120.05552152, 160.1845284 , -59.94002531],
[-1...75723798],
[-120.10591007, 159.86762656, -60.12630269],
[ -90.17317424, 40.02722058, 129.94323241]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________ TestGaussianHMMWithDiagonalCovars.test_criterion[log] _____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff794424b0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFF78FA5E40)
X = array([[ -89.96055284, 39.80222668, 130.05051096],
[-120.05552152, 160.1845284 , -59.94002531],
[-1...75723798],
[-120.10591007, 159.86762656, -60.12630269],
[ -90.17317424, 40.02722058, 129.94323241]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
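The test above collects AIC and BIC values across the candidate state counts ns = [2, 3, 4]. As a hedged illustration of the criteria being compared (using only GaussianHMM.score(), and treating the free-parameter count k as a placeholder the caller must supply):
    # Hypothetical sketch of the information criteria gathered by the loop above.
    # k (number of free parameters) is a placeholder; it is not computed here.
    import numpy as np

    def aic_bic(model, X, k):
        log_l = model.score(X)          # total log-likelihood under the fitted model
        n = len(X)
        aic = 2 * k - 2 * log_l
        bic = k * np.log(n) - 2 * log_l
        return aic, bic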
____ TestGaussianHMMWithDiagonalCovars.test_fit_ignored_init_warns[scaling] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff79442720>
implementation = 'scaling'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffff79082840>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here, is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', n_components=3)
X = array([[-1.58494101e+00, 1.05535316e+00, -1.92657911e+00],
[ 6.98583878e-01, -7.46201430e-01, -1.56626664e-01]... [ 7.02226595e-01, -9.47509422e-01, -1.16620867e+00],
[ 4.79956068e-01, 3.68105791e-01, 2.45414301e-01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
______ TestGaussianHMMWithDiagonalCovars.test_fit_ignored_init_warns[log] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff794428a0>
implementation = 'log'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffff790834d0>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here, is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(n_components=3)
X = array([[-1.58494101e+00, 1.05535316e+00, -1.92657911e+00],
[ 6.98583878e-01, -7.46201430e-01, -1.56626664e-01]... [ 7.02226595e-01, -9.47509422e-01, -1.16620867e+00],
[ 4.79956068e-01, 3.68105791e-01, 2.45414301e-01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
_ TestGaussianHMMWithDiagonalCovars.test_fit_sequences_of_different_length[scaling] _
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff794264e0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', n_components=3)
X = array([[0.76545582, 0.01178803, 0.61194334],
[0.33188226, 0.55964837, 0.33549965],
[0.41118255, 0.0768555 , 0.85304299]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_ TestGaussianHMMWithDiagonalCovars.test_fit_sequences_of_different_length[log] _
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff79426690>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(n_components=3)
X = array([[0.76545582, 0.01178803, 0.61194334],
[0.33188226, 0.55964837, 0.33549965],
[0.41118255, 0.0768555 , 0.85304299]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__ TestGaussianHMMWithDiagonalCovars.test_fit_with_length_one_signal[scaling] __
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff79424b30>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', n_components=3)
X = array([[0.76545582, 0.01178803, 0.61194334],
[0.33188226, 0.55964837, 0.33549965],
[0.41118255, 0.076855...7 , 0.13129105, 0.84281793],
[0.6590363 , 0.5954396 , 0.4363537 ],
[0.35625033, 0.58713093, 0.14947134]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____ TestGaussianHMMWithDiagonalCovars.test_fit_with_length_one_signal[log] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff794425d0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(n_components=3)
X = array([[0.76545582, 0.01178803, 0.61194334],
[0.33188226, 0.55964837, 0.33549965],
[0.41118255, 0.076855...7 , 0.13129105, 0.84281793],
[0.6590363 , 0.5954396 , 0.4363537 ],
[0.35625033, 0.58713093, 0.14947134]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______ TestGaussianHMMWithDiagonalCovars.test_fit_zero_variance[scaling] _______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff79441ac0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________ TestGaussianHMMWithDiagonalCovars.test_fit_zero_variance[log] _________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff79440dd0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______ TestGaussianHMMWithDiagonalCovars.test_fit_with_priors[scaling] ________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff794403b0>
implementation = 'scaling', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', init_params='', n_components=3, n_iter=1)
X = array([[-180.10548806, 79.65731382, 260.1106488 ],
[-240.1207402 , 319.97139868, -120.01791514],
[-2...8371198 ],
[-180.26646139, 79.7657748 , 259.85521097],
[-239.93733261, 319.93811216, -119.84462714]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestGaussianHMMWithDiagonalCovars.test_fit_with_priors[log] __________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff79442ba0>
implementation = 'log', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(init_params='', n_components=3, n_iter=1)
X = array([[-180.10548806, 79.65731382, 260.1106488 ],
[-240.1207402 , 319.97139868, -120.01791514],
[-2...8371198 ],
[-180.26646139, 79.7657748 , 259.85521097],
[-239.93733261, 319.93811216, -119.84462714]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________ TestGaussianHMMWithDiagonalCovars.test_fit_left_right[scaling] ________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff79441190>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_left_right(self, implementation):
transmat = np.zeros((self.n_components, self.n_components))
# Left-to-right: each state is connected to itself and its
# direct successor.
for i in range(self.n_components):
if i == self.n_components - 1:
transmat[i, i] = 1.0
else:
transmat[i, i] = transmat[i, i + 1] = 0.5
# Always start in first state
startprob = np.zeros(self.n_components)
startprob[0] = 1.0
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, covariance_type="diag",
params="mct", init_params="cm",
implementation=implementation)
h.startprob_ = startprob.copy()
h.transmat_ = transmat.copy()
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:343:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(implementation='scaling', init_params='cm', n_components=3,
params='mct')
X = array([[0.76545582, 0.01178803, 0.61194334],
[0.33188226, 0.55964837, 0.33549965],
[0.41118255, 0.076855...88, 0.35095822, 0.70533161],
[0.82070374, 0.134563 , 0.60472616],
[0.28314828, 0.50640782, 0.03846043]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________ TestGaussianHMMWithDiagonalCovars.test_fit_left_right[log] __________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithDiagonalCovars object at 0xffff79441310>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_left_right(self, implementation):
transmat = np.zeros((self.n_components, self.n_components))
# Left-to-right: each state is connected to itself and its
# direct successor.
for i in range(self.n_components):
if i == self.n_components - 1:
transmat[i, i] = 1.0
else:
transmat[i, i] = transmat[i, i + 1] = 0.5
# Always start in first state
startprob = np.zeros(self.n_components)
startprob[0] = 1.0
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, covariance_type="diag",
params="mct", init_params="cm",
implementation=implementation)
h.startprob_ = startprob.copy()
h.transmat_ = transmat.copy()
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:343:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(init_params='cm', n_components=3, params='mct')
X = array([[0.76545582, 0.01178803, 0.61194334],
[0.33188226, 0.55964837, 0.33549965],
[0.41118255, 0.076855...88, 0.35095822, 0.70533161],
[0.82070374, 0.134563 , 0.60472616],
[0.28314828, 0.50640782, 0.03846043]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
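The left-to-right construction in the test above connects each state to itself and its direct successor, with the last state absorbing. For n_components = 3 that loop produces the following transition matrix and start distribution (restated here for readability; not output from this run):
    # Result of the left-to-right setup above for n_components = 3.
    import numpy as np

    transmat = np.array([[0.5, 0.5, 0.0],
                         [0.0, 0.5, 0.5],
                         [0.0, 0.0, 1.0]])
    startprob = np.array([1.0, 0.0, 0.0])   # always start in the first state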
_____ TestGaussianHMMWithTiedCovars.test_score_samples_and_decode[scaling] _____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff79443170>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', init_params='st',
n_components=3)
X = array([[-178.59797475, 79.5657409 , 258.74575809],
[-179.66842145, 79.69139951, 259.84626451],
[-1...49160395],
[-141.24628501, -300.37993208, -140.27125813],
[-141.45463451, -299.54832455, -138.87519327]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______ TestGaussianHMMWithTiedCovars.test_score_samples_and_decode[log] _______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff79443350>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', init_params='st', n_components=3)
X = array([[-178.59797475, 79.5657409 , 258.74575809],
[-179.66842145, 79.69139951, 259.84626451],
[-1...49160395],
[-141.24628501, -300.37993208, -140.27125813],
[-141.45463451, -299.54832455, -138.87519327]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______________ TestGaussianHMMWithTiedCovars.test_fit[scaling] ________________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff79443920>
implementation = 'scaling', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-179.8260405 , 79.4514966 , 259.79962735],
[-179.27112871, 77.62480122, 257.61088571],
[-2...11164467],
[-140.39578223, -298.97133932, -139.09581454],
[-240.0466289 , 319.62253731, -120.17464663]])
_state_sequence = array([0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 2,
2, 2, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2,...1,
1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1,
1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 2, 1])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=3)
X = array([[-179.8260405 , 79.4514966 , 259.79962735],
[-179.27112871, 77.62480122, 257.61088571],
[-2...00041362],
[-179.87638647, 78.63507165, 258.69188309],
[-239.68603927, 320.02944216, -120.34115912]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
_________________ TestGaussianHMMWithTiedCovars.test_fit[log] __________________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff79443aa0>
implementation = 'log', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(covariance_type='tied', n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-179.55236491, 79.62771424, 259.7608728 ],
[-177.97187596, 78.68808744, 257.41532 ],
[-2...15660559],
[-140.96121886, -299.45454742, -139.02843535],
[-239.88218034, 319.64809751, -120.23371888]])
_state_sequence = array([0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 2,
2, 2, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2,...1,
1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1,
1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 2, 1])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', n_components=3)
X = array([[-179.55236491, 79.62771424, 259.7608728 ],
[-177.97187596, 78.68808744, 257.41532 ],
[-2...9192875 ],
[-179.18224786, 79.04927119, 258.52396052],
[-239.62144911, 320.29375504, -120.28252121]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
____________ TestGaussianHMMWithTiedCovars.test_criterion[scaling] _____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff79443ce0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=2,
n_iter=500, random_state=RandomState(MT19937) at 0xFFFF78F8AA40)
X = array([[ -88.93768168, 38.7863479 , 128.58190897],
[-121.06604677, 160.57781114, -58.39821038],
[-1...01375876],
[-118.3170641 , 160.08674499, -60.61250236],
[ -89.6545911 , 40.95156317, 130.074469 ]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
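For orientation only: the scaling-implementation failures above all stop in a call of the shape _hmmc.forward_scaling(startprob_, transmat_, frameprob) -> (log_prob, fwdlattice, scaling_factors). A purely hypothetical pybind11 binding with that shape (not hmmlearn's actual _hmmc source) shows why a numpy.ndarray inc_ref happens inside the call at all: converting the arguments to py::array_t handles and allocating the result lattices as new NumPy arrays both manipulate Python reference counts, which is exactly the operation the failing assertion guards.

// Hypothetical sketch of a forward_scaling-style binding (NOT the real
// hmmlearn _hmmc source); it only marks where ndarray handles are created
// and ref-counted inside such a call.
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>
#include <tuple>
namespace py = pybind11;

std::tuple<double, py::array_t<double>, py::array_t<double>>
forward_scaling(py::array_t<double> startprob,   // argument conversion
                py::array_t<double> transmat,    // already touches refcounts
                py::array_t<double> frameprob) {
    auto fp = frameprob.unchecked<2>();
    const py::ssize_t n_samples = fp.shape(0);
    const py::ssize_t n_components = fp.shape(1);
    // Allocating the outputs creates new ndarray objects -> inc_ref() calls.
    py::array_t<double> fwdlattice({n_samples, n_components});
    py::array_t<double> scaling_factors(n_samples);
    double log_prob = 0.0;
    // ... the scaled forward recursion over startprob/transmat/fp would go
    // here, writing into fwdlattice and scaling_factors ...
    return {log_prob, fwdlattice, scaling_factors};
}

PYBIND11_MODULE(_hmmc_sketch, m) {  // hypothetical module name
    m.def("forward_scaling", &forward_scaling);
}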
______________ TestGaussianHMMWithTiedCovars.test_criterion[log] _______________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff79443ec0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFF78F8B540)
X = array([[ -89.60564785, 41.39352589, 131.49829139],
[-120.09100691, 158.47783538, -61.18173684],
[-1...81249757],
[-120.72047453, 160.6299336 , -58.70398427],
[ -90.67399267, 39.8969804 , 129.78240494]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______ TestGaussianHMMWithTiedCovars.test_fit_ignored_init_warns[scaling] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff79443800>
implementation = 'scaling'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffff79083470>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here, is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=3)
X = array([[ 1.40202525e+00, -4.34259100e-01, -1.25424191e+00],
[ 3.31578554e-01, -3.08600486e-01, -1.53735485e-01]... [ 6.04857190e-01, -2.51936017e-01, 8.99130290e-01],
[ 1.60788687e+00, -1.30106516e+00, 7.60125909e-01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
________ TestGaussianHMMWithTiedCovars.test_fit_ignored_init_warns[log] ________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff79442cf0>
implementation = 'log'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffff78ee0080>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here, is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', n_components=3)
X = array([[ 1.40202525e+00, -4.34259100e-01, -1.25424191e+00],
[ 3.31578554e-01, -3.08600486e-01, -1.53735485e-01]... [ 6.04857190e-01, -2.51936017e-01, 8.99130290e-01],
[ 1.60788687e+00, -1.30106516e+00, 7.60125909e-01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
_ TestGaussianHMMWithTiedCovars.test_fit_sequences_of_different_length[scaling] _
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff79468080>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=3)
X = array([[0.41366737, 0.77872881, 0.58390137],
[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.06556327, 0.05644419]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__ TestGaussianHMMWithTiedCovars.test_fit_sequences_of_different_length[log] ___
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff79468230>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', n_components=3)
X = array([[0.41366737, 0.77872881, 0.58390137],
[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.06556327, 0.05644419]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
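The "PyGILState_Check() failure" wording refers to CPython's thread-state query of the same name: when the check is compiled in, pybind11's handle::inc_ref() verifies that the calling thread holds the GIL before bumping a reference count, and in these tests that verification failing surfaces as the RuntimeError plus the stderr advice captured above. Conceptually (a paraphrase, not pybind11's actual implementation) the guard amounts to:

// Conceptual paraphrase of the firing guard (not pybind11's real code).
#include <Python.h>
#include <stdexcept>

static void checked_incref(PyObject *obj) {
    if (!PyGILState_Check()) {   // non-zero iff this thread holds the GIL
        throw std::runtime_error("inc_ref() PyGILState_Check() failure");
    }
    Py_INCREF(obj);
}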
____ TestGaussianHMMWithTiedCovars.test_fit_with_length_one_signal[scaling] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff79468470>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=3)
X = array([[0.41366737, 0.77872881, 0.58390137],
[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.065563...47, 0.76688005, 0.83198977],
[0.30977806, 0.59758229, 0.87239246],
[0.98302087, 0.46740328, 0.87574449]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______ TestGaussianHMMWithTiedCovars.test_fit_with_length_one_signal[log] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff794685f0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', n_components=3)
X = array([[0.41366737, 0.77872881, 0.58390137],
[0.18263144, 0.82608225, 0.10540183],
[0.28357668, 0.065563...47, 0.76688005, 0.83198977],
[0.30977806, 0.59758229, 0.87239246],
[0.98302087, 0.46740328, 0.87574449]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________ TestGaussianHMMWithTiedCovars.test_fit_zero_variance[scaling] _________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff79468800>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________ TestGaussianHMMWithTiedCovars.test_fit_zero_variance[log] ___________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff79468980>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestGaussianHMMWithTiedCovars.test_fit_with_priors[scaling] __________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff79468b90>
implementation = 'scaling', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', implementation='scaling', init_params='',
n_components=3, n_iter=1)
X = array([[-180.0167318 , 80.49128541, 259.45410712],
[-178.04279102, 81.61267343, 257.1997866 ],
[-2...20799107],
[-241.41137611, 318.85065362, -119.41680246],
[-241.69338115, 317.02826007, -119.84835576]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___________ TestGaussianHMMWithTiedCovars.test_fit_with_priors[log] ____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithTiedCovars object at 0xffff79468d10>
implementation = 'log', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='tied', init_params='', n_components=3, n_iter=1)
X = array([[-179.61814266, 79.48299151, 259.96047098],
[-178.80919992, 77.07844944, 258.36395653],
[-2...72241793],
[-239.94357433, 321.65020823, -119.37212358],
[-239.67551362, 322.95953881, -119.98647686]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
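On the opt-out that each of these messages mentions: PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF only compiles the check out, it does not make the underlying refcount traffic GIL-safe, and it has to be defined identically in every translation unit of the extension to avoid the ODR violations the message warns about. A minimal sketch of the two usual ways to guarantee that consistency (file and target names below are hypothetical):

// Option 1: a shared first-include header used by every translation unit,
// so the macro is defined before any pybind11 header is seen.
// common_pybind11.h (hypothetical file name)
#pragma once
#define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>

// Option 2: define it on the build itself so no translation unit can
// disagree, e.g. on the compiler command line:
//     c++ -DPYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF ...
// or with CMake (hypothetical target name):
//     target_compile_definitions(_hmmc PRIVATE
//         PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF)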
_____ TestGaussianHMMWithFullCovars.test_score_samples_and_decode[scaling] _____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff79469370>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', init_params='st',
n_components=3)
X = array([[-178.91762365, 78.21428218, 259.72766302],
[-180.29036839, 81.65614717, 258.7654313 ],
[-1...12852298],
[-138.75200849, -298.93773986, -141.62863338],
[-139.88757229, -300.99150251, -139.32120466]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______ TestGaussianHMMWithFullCovars.test_score_samples_and_decode[log] _______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff794694f0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params="st", implementation=implementation)
h.means_ = self.means
h.covars_ = self.covars
# Make sure the means are far apart so posteriors.argmax()
# picks the actual component used to generate the observations.
h.means_ = 20 * h.means_
gaussidx = np.repeat(np.arange(self.n_components), 5)
n_samples = len(gaussidx)
X = (self.prng.randn(n_samples, self.n_features)
+ h.means_[gaussidx])
h._init(X, [n_samples])
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gaussian_hmm.py:52:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', init_params='st', n_components=3)
X = array([[-178.91762365, 78.21428218, 259.72766302],
[-180.29036839, 81.65614717, 258.7654313 ],
[-1...12852298],
[-138.75200849, -298.93773986, -141.62863338],
[-139.88757229, -300.99150251, -139.32120466]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______________ TestGaussianHMMWithFullCovars.test_fit[scaling] ________________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff79469b20>
implementation = 'scaling', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(covariance_type='full', implementation='scaling', n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-180.58095266, 76.77820758, 260.82999791],
[-179.69582924, 80.01947451, 260.89843931],
[-1...80692975],
[-239.96300261, 321.60437243, -119.98216274],
[-243.00912572, 319.79523591, -120.22074218]])
_state_sequence = array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 2, 2, 2, 1, 1, 1, 1, 2, 1,
1, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0,...0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1,
1, 0, 2, 0, 1, 1, 1, 1, 1, 0, 1, 1])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', n_components=3)
X = array([[-180.58095266, 76.77820758, 260.82999791],
[-179.69582924, 80.01947451, 260.89843931],
[-1...69639643],
[-180.66903787, 80.96928136, 260.49492216],
[-179.21107272, 83.36164545, 259.72145566]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
_________________ TestGaussianHMMWithFullCovars.test_fit[log] __________________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff79469ca0>
implementation = 'log', params = 'stmc', n_iter = 5, kwargs = {}
h = GaussianHMM(covariance_type='full', n_components=3)
lengths = [10, 10, 10, 10, 10, 10, ...]
X = array([[-180.58095266, 76.77820758, 260.82999791],
[-179.69582924, 80.01947451, 260.89843931],
[-1...80692975],
[-239.96300261, 321.60437243, -119.98216274],
[-243.00912572, 319.79523591, -120.22074218]])
_state_sequence = array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 2, 2, 2, 1, 1, 1, 1, 2, 1,
1, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0,...0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1,
1, 0, 2, 0, 1, 1, 1, 1, 1, 0, 1, 1])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stmc', n_iter=5, **kwargs):
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.means_ = 20 * self.means
h.covars_ = self.covars
lengths = [10] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Mess up the parameters and see if we can re-learn them.
# TODO: change the params and uncomment the check
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', n_components=3)
X = array([[-180.58095266, 76.77820758, 260.82999791],
[-179.69582924, 80.01947451, 260.89843931],
[-1...69639643],
[-180.66903787, 80.96928136, 260.49492216],
[-179.21107272, 83.36164545, 259.72145566]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
____________ TestGaussianHMMWithFullCovars.test_criterion[scaling] _____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff79469e80>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', n_components=2,
n_iter=500, random_state=RandomState(MT19937) at 0xFFFF78E8A240)
X = array([[ -89.29523181, 41.84500715, 129.19454811],
[-121.84420946, 159.33718199, -59.47865859],
[-1...24221311],
[-119.33776175, 161.48568424, -60.76453046],
[ -89.46135014, 39.43571927, 130.17918399]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestGaussianHMMWithFullCovars.test_criterion[log] _______________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff7946a000>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(42)
m1 = hmm.GaussianHMM(self.n_components, init_params="",
covariance_type=self.covariance_type)
m1.startprob_ = self.startprob
m1.transmat_ = self.transmat
m1.means_ = self.means * 10
m1.covars_ = self.covars
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.GaussianHMM(n, self.covariance_type, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFF78E8AD40)
X = array([[ -89.29523181, 41.84500715, 129.19454811],
[-121.84420946, 159.33718199, -59.47865859],
[-1...24221311],
[-119.33776175, 161.48568424, -60.76453046],
[ -89.46135014, 39.43571927, 130.17918399]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______ TestGaussianHMMWithFullCovars.test_fit_ignored_init_warns[scaling] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff7946a210>
implementation = 'scaling'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffff78e5b380>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here, is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', n_components=3)
X = array([[ 1.08237635e+00, -1.78571782e+00, -2.72336983e-01],
[-2.90368389e-01, 1.65614717e+00, -1.23456870e+00]... [-8.50841186e-02, -3.43870735e-01, -6.18822776e-01],
[ 3.90241258e-01, -1.85025630e+00, -9.02633482e-01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
________ TestGaussianHMMWithFullCovars.test_fit_ignored_init_warns[log] ________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff79441f70>
implementation = 'log'
caplog = <_pytest.logging.LogCaptureFixture object at 0xffff79082e40>
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_ignored_init_warns(self, implementation, caplog):
# This test occasionally will be flaky in learning the model.
# What is important here, is that the expected log message is produced
# We can test convergence properties elsewhere.
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
> h.fit(self.prng.randn(100, self.n_components))
hmmlearn/tests/test_gaussian_hmm.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', n_components=3)
X = array([[ 1.08237635e+00, -1.78571782e+00, -2.72336983e-01],
[-2.90368389e-01, 1.65614717e+00, -1.23456870e+00]... [-8.50841186e-02, -3.43870735e-01, -6.18822776e-01],
[ 3.90241258e-01, -1.85025630e+00, -9.02633482e-01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
_ TestGaussianHMMWithFullCovars.test_fit_sequences_of_different_length[scaling] _
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff7946a9c0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', n_components=3)
X = array([[0.35625033, 0.58713093, 0.14947134],
[0.1712386 , 0.39716452, 0.63795156],
[0.37251995, 0.00240676, 0.54881636]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__ TestGaussianHMMWithFullCovars.test_fit_sequences_of_different_length[log] ___
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff7946a570>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sequences_of_different_length(self, implementation):
lengths = [3, 4, 5]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: setting an array element with a sequence.
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', n_components=3)
X = array([[0.35625033, 0.58713093, 0.14947134],
[0.1712386 , 0.39716452, 0.63795156],
[0.37251995, 0.00240676, 0.54881636]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____ TestGaussianHMMWithFullCovars.test_fit_with_length_one_signal[scaling] ____
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff7946b290>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', n_components=3)
X = array([[0.35625033, 0.58713093, 0.14947134],
[0.1712386 , 0.39716452, 0.63795156],
[0.37251995, 0.002406...88, 0.35095822, 0.70533161],
[0.82070374, 0.134563 , 0.60472616],
[0.28314828, 0.50640782, 0.03846043]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______ TestGaussianHMMWithFullCovars.test_fit_with_length_one_signal[log] ______
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff7946b4d0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_length_one_signal(self, implementation):
lengths = [10, 8, 1]
X = self.prng.rand(sum(lengths), self.n_features)
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
# This shouldn't raise
# ValueError: zero-size array to reduction operation maximum which
# has no identity
> h.fit(X, lengths=lengths)
hmmlearn/tests/test_gaussian_hmm.py:169:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', n_components=3)
X = array([[0.35625033, 0.58713093, 0.14947134],
[0.1712386 , 0.39716452, 0.63795156],
[0.37251995, 0.002406...88, 0.35095822, 0.70533161],
[0.82070374, 0.134563 , 0.60472616],
[0.28314828, 0.50640782, 0.03846043]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________ TestGaussianHMMWithFullCovars.test_fit_zero_variance[scaling] _________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff7946b740>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:922 Fitting a model with 50 free scalar parameters with only 36 data points will result in a degenerate solution.
__________ TestGaussianHMMWithFullCovars.test_fit_zero_variance[log] ___________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff7946b8c0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_zero_variance(self, implementation):
# Example from issue #2 on GitHub.
X = np.asarray([
[7.15000000e+02, 5.85000000e+02, 0.00000000e+00, 0.00000000e+00],
[7.15000000e+02, 5.20000000e+02, 1.04705811e+00, -6.03696289e+01],
[7.15000000e+02, 4.55000000e+02, 7.20886230e-01, -5.27055664e+01],
[7.15000000e+02, 3.90000000e+02, -4.57946777e-01, -7.80605469e+01],
[7.15000000e+02, 3.25000000e+02, -6.43127441e+00, -5.59954834e+01],
[7.15000000e+02, 2.60000000e+02, -2.90063477e+00, -7.80220947e+01],
[7.15000000e+02, 1.95000000e+02, 8.45532227e+00, -7.03294373e+01],
[7.15000000e+02, 1.30000000e+02, 4.09387207e+00, -5.83621216e+01],
[7.15000000e+02, 6.50000000e+01, -1.21667480e+00, -4.48131409e+01]
])
h = hmm.GaussianHMM(3, self.covariance_type,
implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gaussian_hmm.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', n_components=3)
X = array([[ 7.15000000e+02, 5.85000000e+02, 0.00000000e+00,
0.00000000e+00],
[ 7.15000000e+02, 5.20000...07e+00,
-5.83621216e+01],
[ 7.15000000e+02, 6.50000000e+01, -1.21667480e+00,
-4.48131409e+01]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:922 Fitting a model with 50 free scalar parameters with only 36 data points will result in a degenerate solution.
_________ TestGaussianHMMWithFullCovars.test_fit_with_priors[scaling] __________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff7946bb00>
implementation = 'scaling', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', implementation='scaling', init_params='',
n_components=3, n_iter=1)
X = array([[-180.58095266, 76.77820758, 260.82999791],
[-179.69582924, 80.01947451, 260.89843931],
[-1...95169853],
[-239.53275248, 319.24695192, -120.48672946],
[-242.40666906, 318.95372592, -121.39814967]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___________ TestGaussianHMMWithFullCovars.test_fit_with_priors[log] ____________
self = <hmmlearn.tests.test_gaussian_hmm.TestGaussianHMMWithFullCovars object at 0xffff7946bc80>
implementation = 'log', init_params = 'mc', params = 'stmc', n_iter = 20
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_priors(self, implementation, init_params='mc',
params='stmc', n_iter=20):
# We have a few options to make this a robust test, such as
# a. increase the amount of training data to ensure convergence
# b. Only learn some of the parameters (simplify the problem)
# c. Increase the number of iterations
#
# (c) seems to not affect the ci/cd time too much.
startprob_prior = 10 * self.startprob + 2.0
transmat_prior = 10 * self.transmat + 2.0
means_prior = self.means
means_weight = 2.0
covars_weight = 2.0
if self.covariance_type in ('full', 'tied'):
covars_weight += self.n_features
covars_prior = self.covars
h = hmm.GaussianHMM(self.n_components, self.covariance_type,
implementation=implementation)
h.startprob_ = self.startprob
h.startprob_prior = startprob_prior
h.transmat_ = normalized(
self.transmat + np.diag(self.prng.rand(self.n_components)), 1)
h.transmat_prior = transmat_prior
h.means_ = 20 * self.means
h.means_prior = means_prior
h.means_weight = means_weight
h.covars_ = self.covars
h.covars_prior = covars_prior
h.covars_weight = covars_weight
lengths = [200] * 10
X, _state_sequence = h.sample(sum(lengths), random_state=self.prng)
# Re-initialize the parameters and check that we can converge to
# the original parameter values.
h_learn = hmm.GaussianHMM(self.n_components, self.covariance_type,
init_params=init_params, params=params,
implementation=implementation,)
# don't use random parameters for testing
init = 1. / h_learn.n_components
h_learn.startprob_ = np.full(h_learn.n_components, init)
h_learn.transmat_ = \
np.full((h_learn.n_components, h_learn.n_components), init)
h_learn.n_iter = 0
h_learn.fit(X, lengths=lengths)
> assert_log_likelihood_increasing(h_learn, X, lengths, n_iter)
hmmlearn/tests/test_gaussian_hmm.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GaussianHMM(covariance_type='full', init_params='', n_components=3, n_iter=1)
X = array([[-180.58095266, 76.77820758, 260.82999791],
[-179.69582924, 80.01947451, 260.89843931],
[-1...95169853],
[-239.53275248, 319.24695192, -120.48672946],
[-242.40666906, 318.95372592, -121.39814967]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-diag] __
covariance_type = 'diag', implementation = 'scaling', init_params = 'mcw'
verbose = False
@pytest.mark.parametrize("covariance_type",
["diag", "spherical", "tied", "full"])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering(
covariance_type, implementation, init_params='mcw', verbose=False
):
"""
Sanity check GMM-HMM fit behaviour when run on multiple sequences
aka multiple frames.
Training data consumed during GMM-HMM fit is packed into a single
array X containing one or more sequences. In the case where
there are two or more input sequences, the ordering that the
sequences are packed into X should not influence the results
of the fit. Major differences in convergence during EM
iterations by merely permuting sequence order in the input
indicates a likely defect in the fit implementation.
Note: the ordering of samples inside a given sequence
is very meaningful, permuting the order of samples would
destroy the the state transition structure in the input data.
See issue 410 on github:
https://github.com/hmmlearn/hmmlearn/issues/410
"""
sequence_data = EXAMPLE_SEQUENCES_ISSUE_410_PRUNED
scores = []
for p in make_permutations(sequence_data):
sequences = sequence_data[p]
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]
model = hmm.GMMHMM(
n_components=2,
n_mix=2,
n_iter=100,
covariance_type=covariance_type,
verbose=verbose,
init_params=init_params,
random_state=1234,
implementation=implementation
)
# don't use random parameters for testing
init = 1. / model.n_components
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
model.monitor_ = StrictMonitor(
model.monitor_.tol,
model.monitor_.n_iter,
model.monitor_.verbose,
)
> model.fit(X, lengths)
hmmlearn/tests/test_gmm_hmm_multisequence.py:280:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5, -1.5, -1.5],
[-1.5, -1.5, -1.5, -1.5]],
[[-1.5, -1.5, -1.5, -...n_components=2,
n_iter=100, n_mix=2, random_state=1234,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[0.00992058, 0.44151747, 0.5395124 , 0.40644765],
[0.00962487, 0.45613006, 0.52375835, 0.3899082 ],
...00656536, 0.39309287, 0.60035396, 0.41596898],
[0.00693208, 0.37821782, 0.59813255, 0.4394344 ]], dtype=float32)
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-spherical] _
covariance_type = 'spherical', implementation = 'scaling', init_params = 'mcw'
verbose = False
@pytest.mark.parametrize("covariance_type",
["diag", "spherical", "tied", "full"])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering(
covariance_type, implementation, init_params='mcw', verbose=False
):
"""
Sanity check GMM-HMM fit behaviour when run on multiple sequences
aka multiple frames.
Training data consumed during GMM-HMM fit is packed into a single
array X containing one or more sequences. In the case where
there are two or more input sequences, the ordering that the
sequences are packed into X should not influence the results
of the fit. Major differences in convergence during EM
iterations by merely permuting sequence order in the input
indicates a likely defect in the fit implementation.
Note: the ordering of samples inside a given sequence
is very meaningful, permuting the order of samples would
destroy the the state transition structure in the input data.
See issue 410 on github:
https://github.com/hmmlearn/hmmlearn/issues/410
"""
sequence_data = EXAMPLE_SEQUENCES_ISSUE_410_PRUNED
scores = []
for p in make_permutations(sequence_data):
sequences = sequence_data[p]
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]
model = hmm.GMMHMM(
n_components=2,
n_mix=2,
n_iter=100,
covariance_type=covariance_type,
verbose=verbose,
init_params=init_params,
random_state=1234,
implementation=implementation
)
# don't use random parameters for testing
init = 1. / model.n_components
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
model.monitor_ = StrictMonitor(
model.monitor_.tol,
model.monitor_.n_iter,
model.monitor_.verbose,
)
> model.fit(X, lengths)
hmmlearn/tests/test_gmm_hmm_multisequence.py:280:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.]]),
covars_weight=a...n_components=2,
n_iter=100, n_mix=2, random_state=1234,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[0.00992058, 0.44151747, 0.5395124 , 0.40644765],
[0.00962487, 0.45613006, 0.52375835, 0.3899082 ],
...00656536, 0.39309287, 0.60035396, 0.41596898],
[0.00693208, 0.37821782, 0.59813255, 0.4394344 ]], dtype=float32)
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-tied] __
covariance_type = 'tied', implementation = 'scaling', init_params = 'mcw'
verbose = False
@pytest.mark.parametrize("covariance_type",
["diag", "spherical", "tied", "full"])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering(
covariance_type, implementation, init_params='mcw', verbose=False
):
"""
Sanity check GMM-HMM fit behaviour when run on multiple sequences
aka multiple frames.
Training data consumed during GMM-HMM fit is packed into a single
array X containing one or more sequences. In the case where
there are two or more input sequences, the ordering that the
sequences are packed into X should not influence the results
of the fit. Major differences in convergence during EM
iterations by merely permuting sequence order in the input
indicates a likely defect in the fit implementation.
Note: the ordering of samples inside a given sequence
is very meaningful, permuting the order of samples would
destroy the the state transition structure in the input data.
See issue 410 on github:
https://github.com/hmmlearn/hmmlearn/issues/410
"""
sequence_data = EXAMPLE_SEQUENCES_ISSUE_410_PRUNED
scores = []
for p in make_permutations(sequence_data):
sequences = sequence_data[p]
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]
model = hmm.GMMHMM(
n_components=2,
n_mix=2,
n_iter=100,
covariance_type=covariance_type,
verbose=verbose,
init_params=init_params,
random_state=1234,
implementation=implementation
)
# don't use random parameters for testing
init = 1. / model.n_components
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
model.monitor_ = StrictMonitor(
model.monitor_.tol,
model.monitor_.n_iter,
model.monitor_.verbose,
)
> model.fit(X, lengths)
hmmlearn/tests/test_gmm_hmm_multisequence.py:280:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0....n_components=2,
n_iter=100, n_mix=2, random_state=1234,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[0.00992058, 0.44151747, 0.5395124 , 0.40644765],
[0.00962487, 0.45613006, 0.52375835, 0.3899082 ],
...00656536, 0.39309287, 0.60035396, 0.41596898],
[0.00693208, 0.37821782, 0.59813255, 0.4394344 ]], dtype=float32)
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-full] __
covariance_type = 'full', implementation = 'scaling', init_params = 'mcw'
verbose = False
@pytest.mark.parametrize("covariance_type",
["diag", "spherical", "tied", "full"])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering(
covariance_type, implementation, init_params='mcw', verbose=False
):
"""
Sanity check GMM-HMM fit behaviour when run on multiple sequences
aka multiple frames.
Training data consumed during GMM-HMM fit is packed into a single
array X containing one or more sequences. In the case where
there are two or more input sequences, the ordering that the
sequences are packed into X should not influence the results
of the fit. Major differences in convergence during EM
iterations by merely permuting sequence order in the input
indicates a likely defect in the fit implementation.
Note: the ordering of samples inside a given sequence
is very meaningful, permuting the order of samples would
destroy the the state transition structure in the input data.
See issue 410 on github:
https://github.com/hmmlearn/hmmlearn/issues/410
"""
sequence_data = EXAMPLE_SEQUENCES_ISSUE_410_PRUNED
scores = []
for p in make_permutations(sequence_data):
sequences = sequence_data[p]
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]
model = hmm.GMMHMM(
n_components=2,
n_mix=2,
n_iter=100,
covariance_type=covariance_type,
verbose=verbose,
init_params=init_params,
random_state=1234,
implementation=implementation
)
# don't use random parameters for testing
init = 1. / model.n_components
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
model.monitor_ = StrictMonitor(
model.monitor_.tol,
model.monitor_.n_iter,
model.monitor_.verbose,
)
> model.fit(X, lengths)
hmmlearn/tests/test_gmm_hmm_multisequence.py:280:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0.,...n_components=2,
n_iter=100, n_mix=2, random_state=1234,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[0.00992058, 0.44151747, 0.5395124 , 0.40644765],
[0.00962487, 0.45613006, 0.52375835, 0.3899082 ],
...00656536, 0.39309287, 0.60035396, 0.41596898],
[0.00693208, 0.37821782, 0.59813255, 0.4394344 ]], dtype=float32)
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-diag] ____
covariance_type = 'diag', implementation = 'log', init_params = 'mcw'
verbose = False
@pytest.mark.parametrize("covariance_type",
["diag", "spherical", "tied", "full"])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering(
covariance_type, implementation, init_params='mcw', verbose=False
):
"""
Sanity check GMM-HMM fit behaviour when run on multiple sequences
aka multiple frames.
Training data consumed during GMM-HMM fit is packed into a single
array X containing one or more sequences. In the case where
there are two or more input sequences, the ordering that the
sequences are packed into X should not influence the results
of the fit. Major differences in convergence during EM
iterations by merely permuting sequence order in the input
indicates a likely defect in the fit implementation.
Note: the ordering of samples inside a given sequence
is very meaningful, permuting the order of samples would
destroy the the state transition structure in the input data.
See issue 410 on github:
https://github.com/hmmlearn/hmmlearn/issues/410
"""
sequence_data = EXAMPLE_SEQUENCES_ISSUE_410_PRUNED
scores = []
for p in make_permutations(sequence_data):
sequences = sequence_data[p]
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]
model = hmm.GMMHMM(
n_components=2,
n_mix=2,
n_iter=100,
covariance_type=covariance_type,
verbose=verbose,
init_params=init_params,
random_state=1234,
implementation=implementation
)
# don't use random parameters for testing
init = 1. / model.n_components
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
model.monitor_ = StrictMonitor(
model.monitor_.tol,
model.monitor_.n_iter,
model.monitor_.verbose,
)
> model.fit(X, lengths)
hmmlearn/tests/test_gmm_hmm_multisequence.py:280:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5, -1.5, -1.5],
[-1.5, -1.5, -1.5, -1.5]],
[[-1.5, -1.5, -1.5, -...n_components=2,
n_iter=100, n_mix=2, random_state=1234,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[0.00992058, 0.44151747, 0.5395124 , 0.40644765],
[0.00962487, 0.45613006, 0.52375835, 0.3899082 ],
...00656536, 0.39309287, 0.60035396, 0.41596898],
[0.00693208, 0.37821782, 0.59813255, 0.4394344 ]], dtype=float32)
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-spherical] _
covariance_type = 'spherical', implementation = 'log', init_params = 'mcw'
verbose = False
@pytest.mark.parametrize("covariance_type",
["diag", "spherical", "tied", "full"])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering(
covariance_type, implementation, init_params='mcw', verbose=False
):
"""
Sanity check GMM-HMM fit behaviour when run on multiple sequences
aka multiple frames.
Training data consumed during GMM-HMM fit is packed into a single
array X containing one or more sequences. In the case where
there are two or more input sequences, the ordering that the
sequences are packed into X should not influence the results
of the fit. Major differences in convergence during EM
iterations by merely permuting sequence order in the input
indicates a likely defect in the fit implementation.
Note: the ordering of samples inside a given sequence
is very meaningful, permuting the order of samples would
destroy the the state transition structure in the input data.
See issue 410 on github:
https://github.com/hmmlearn/hmmlearn/issues/410
"""
sequence_data = EXAMPLE_SEQUENCES_ISSUE_410_PRUNED
scores = []
for p in make_permutations(sequence_data):
sequences = sequence_data[p]
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]
model = hmm.GMMHMM(
n_components=2,
n_mix=2,
n_iter=100,
covariance_type=covariance_type,
verbose=verbose,
init_params=init_params,
random_state=1234,
implementation=implementation
)
# don't use random parameters for testing
init = 1. / model.n_components
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
model.monitor_ = StrictMonitor(
model.monitor_.tol,
model.monitor_.n_iter,
model.monitor_.verbose,
)
> model.fit(X, lengths)
hmmlearn/tests/test_gmm_hmm_multisequence.py:280:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.]]),
covars_weight=a...n_components=2,
n_iter=100, n_mix=2, random_state=1234,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[0.00992058, 0.44151747, 0.5395124 , 0.40644765],
[0.00962487, 0.45613006, 0.52375835, 0.3899082 ],
...00656536, 0.39309287, 0.60035396, 0.41596898],
[0.00693208, 0.37821782, 0.59813255, 0.4394344 ]], dtype=float32)
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-tied] ____
covariance_type = 'tied', implementation = 'log', init_params = 'mcw'
verbose = False
@pytest.mark.parametrize("covariance_type",
["diag", "spherical", "tied", "full"])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering(
covariance_type, implementation, init_params='mcw', verbose=False
):
"""
Sanity check GMM-HMM fit behaviour when run on multiple sequences
aka multiple frames.
Training data consumed during GMM-HMM fit is packed into a single
array X containing one or more sequences. In the case where
there are two or more input sequences, the ordering that the
sequences are packed into X should not influence the results
of the fit. Major differences in convergence during EM
iterations by merely permuting sequence order in the input
indicates a likely defect in the fit implementation.
Note: the ordering of samples inside a given sequence
is very meaningful, permuting the order of samples would
destroy the the state transition structure in the input data.
See issue 410 on github:
https://github.com/hmmlearn/hmmlearn/issues/410
"""
sequence_data = EXAMPLE_SEQUENCES_ISSUE_410_PRUNED
scores = []
for p in make_permutations(sequence_data):
sequences = sequence_data[p]
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]
model = hmm.GMMHMM(
n_components=2,
n_mix=2,
n_iter=100,
covariance_type=covariance_type,
verbose=verbose,
init_params=init_params,
random_state=1234,
implementation=implementation
)
# don't use random parameters for testing
init = 1. / model.n_components
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
model.monitor_ = StrictMonitor(
model.monitor_.tol,
model.monitor_.n_iter,
model.monitor_.verbose,
)
> model.fit(X, lengths)
hmmlearn/tests/test_gmm_hmm_multisequence.py:280:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0....n_components=2,
n_iter=100, n_mix=2, random_state=1234,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[0.00992058, 0.44151747, 0.5395124 , 0.40644765],
[0.00962487, 0.45613006, 0.52375835, 0.3899082 ],
...00656536, 0.39309287, 0.60035396, 0.41596898],
[0.00693208, 0.37821782, 0.59813255, 0.4394344 ]], dtype=float32)
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___ test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-full] ____
covariance_type = 'full', implementation = 'log', init_params = 'mcw'
verbose = False
@pytest.mark.parametrize("covariance_type",
["diag", "spherical", "tied", "full"])
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering(
covariance_type, implementation, init_params='mcw', verbose=False
):
"""
Sanity check GMM-HMM fit behaviour when run on multiple sequences
aka multiple frames.
Training data consumed during GMM-HMM fit is packed into a single
array X containing one or more sequences. In the case where
there are two or more input sequences, the ordering that the
sequences are packed into X should not influence the results
of the fit. Major differences in convergence during EM
iterations by merely permuting sequence order in the input
indicates a likely defect in the fit implementation.
Note: the ordering of samples inside a given sequence
is very meaningful, permuting the order of samples would
destroy the the state transition structure in the input data.
See issue 410 on github:
https://github.com/hmmlearn/hmmlearn/issues/410
"""
sequence_data = EXAMPLE_SEQUENCES_ISSUE_410_PRUNED
scores = []
for p in make_permutations(sequence_data):
sequences = sequence_data[p]
X = np.concatenate(sequences)
lengths = [len(seq) for seq in sequences]
model = hmm.GMMHMM(
n_components=2,
n_mix=2,
n_iter=100,
covariance_type=covariance_type,
verbose=verbose,
init_params=init_params,
random_state=1234,
implementation=implementation
)
# don't use random parameters for testing
init = 1. / model.n_components
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
model.monitor_ = StrictMonitor(
model.monitor_.tol,
model.monitor_.n_iter,
model.monitor_.verbose,
)
> model.fit(X, lengths)
hmmlearn/tests/test_gmm_hmm_multisequence.py:280:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0.,...n_components=2,
n_iter=100, n_mix=2, random_state=1234,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[0.00992058, 0.44151747, 0.5395124 , 0.40644765],
[0.00962487, 0.45613006, 0.52375835, 0.3899082 ],
...00656536, 0.39309287, 0.60035396, 0.41596898],
[0.00693208, 0.37821782, 0.59813255, 0.4394344 ]], dtype=float32)
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
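The docstring of the failing test above describes hmmlearn's multi-sequence packing convention: several sequences are concatenated into a single array X and their per-sequence lengths are passed alongside. A small illustration of that convention (my own sketch with made-up data, not part of the test suite):

    import numpy as np

    # Two made-up sequences of 4-dimensional observations.
    seq_a = np.random.rand(50, 4)
    seq_b = np.random.rand(30, 4)

    # Packing convention used by fit()/score(): concatenate along axis 0 and
    # pass per-sequence lengths so the model can split X back into sequences.
    X = np.concatenate([seq_a, seq_b])
    lengths = [len(seq_a), len(seq_b)]        # [50, 30]

    # Permuting whole sequences only changes the packing order; per the test's
    # claim, model.fit(X, lengths) should converge to essentially the same fit.
    X_perm = np.concatenate([seq_b, seq_a])
    lengths_perm = [len(seq_b), len(seq_a)]   # [30, 50]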
_____ TestGMMHMMWithSphericalCovars.test_score_samples_and_decode[scaling] _____
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffff794bd730>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
...state=RandomState(MT19937) at 0xFFFF78E88F40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 3.10434458, 4.41854888],
[ 5.6930133 , 2.79308255],
[34.40086102, 37.4658949 ],
...,
[ 3.70365171, 3.71508656],
[ 1.74345864, 3.15260967],
[ 6.91178766, 9.37996936]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
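_hmmc.forward_scaling, where this failure is raised, is hmmlearn's compiled forward pass in probability space with per-step rescaling. As a rough pure-NumPy stand-in (an illustration of the recursion only, not the extension's actual code):

    import numpy as np

    def forward_scaling(startprob, transmat, frameprob):
        # Scaled forward recursion: rescale each row of the lattice to sum to 1
        # to avoid underflow; the log-likelihood is recovered from the scaling
        # factors.  The interface mirrors the call shown in the traceback above.
        n_samples, n_components = frameprob.shape
        fwdlattice = np.empty((n_samples, n_components))
        scaling = np.empty(n_samples)
        fwdlattice[0] = startprob * frameprob[0]
        scaling[0] = 1.0 / fwdlattice[0].sum()
        fwdlattice[0] *= scaling[0]
        for t in range(1, n_samples):
            fwdlattice[t] = (fwdlattice[t - 1] @ transmat) * frameprob[t]
            scaling[t] = 1.0 / fwdlattice[t].sum()
            fwdlattice[t] *= scaling[t]
        log_prob = -np.log(scaling).sum()
        return log_prob, fwdlattice, scaling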
_______ TestGMMHMMWithSphericalCovars.test_score_samples_and_decode[log] _______
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffff794bd8e0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
...state=RandomState(MT19937) at 0xFFFF78F8BD40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 3.10434458, 4.41854888],
[ 5.6930133 , 2.79308255],
[34.40086102, 37.4658949 ],
...,
[ 3.70365171, 3.71508656],
[ 1.74345864, 3.15260967],
[ 6.91178766, 9.37996936]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
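_hmmc.forward_log is the log-space counterpart used by the 'log' implementation in _score_log above (and by _fit_log earlier). A pure-NumPy sketch of the recursion it performs, again only an illustration of the algorithm rather than the extension code:

    import numpy as np
    from scipy.special import logsumexp

    def forward_log(startprob, transmat, log_frameprob):
        # Log-space forward recursion; startprob/transmat are plain
        # probabilities (as in the call above), emissions arrive in log space.
        n_samples, n_components = log_frameprob.shape
        with np.errstate(divide="ignore"):     # log(0) -> -inf is acceptable
            log_startprob = np.log(startprob)
            log_transmat = np.log(transmat)
        fwdlattice = np.empty((n_samples, n_components))
        fwdlattice[0] = log_startprob + log_frameprob[0]
        for t in range(1, n_samples):
            for j in range(n_components):
                fwdlattice[t, j] = logsumexp(
                    fwdlattice[t - 1] + log_transmat[:, j]
                ) + log_frameprob[t, j]
        return logsumexp(fwdlattice[-1]), fwdlattice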
_______________ TestGMMHMMWithSphericalCovars.test_fit[scaling] ________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffff794bda90>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
...state=RandomState(MT19937) at 0xFFFF78EA4040,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 3.10434458, 4.41854888],
[ 5.6930133 , 2.79308255],
[34.40086102, 37.4658949 ],
...,
[ 3.70365171, 3.71508656],
[ 1.74345864, 3.15260967],
[ 6.91178766, 9.37996936]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestGMMHMMWithSphericalCovars.test_fit[log] __________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffff794bdc10>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
...state=RandomState(MT19937) at 0xFFFF78EA4D40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 3.10434458, 4.41854888],
[ 5.6930133 , 2.79308255],
[34.40086102, 37.4658949 ],
...,
[ 3.70365171, 3.71508656],
[ 1.74345864, 3.15260967],
[ 6.91178766, 9.37996936]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestGMMHMMWithSphericalCovars.test_fit_sparse_data[scaling] __________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffff794bddf0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
...state=RandomState(MT19937) at 0xFFFF78F8AA40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 3999.84865619, 3069.23303621],
[ 4002.43732491, 3067.60756989],
[34695.58244203, 37696.508278... [ 4000.44796333, 3068.5295739 ],
[ 3998.48777025, 3067.967097 ],
[ 6450.19377286, 7478.35563 ]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
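The warnings captured above come from hmmlearn's initialization step: any parameter whose code letter ('s', 't', 'm', 'c', 'w') appears in init_params is re-initialized from the data during fit(), overwriting values that were set by hand. A short sketch of how the init_params string controls this (mirroring the init_params='mcw' usage in the multi-sequence test earlier; the numbers are made up):

    import numpy as np
    from hmmlearn import hmm

    # 'm', 'c', 'w': means, covariances and mixture weights are initialized
    # from the data; 's' and 't' are absent, so the start probabilities and
    # transition matrix set below are kept rather than overwritten.
    model = hmm.GMMHMM(n_components=2, n_mix=2, init_params="mcw")
    model.startprob_ = np.array([0.5, 0.5])
    model.transmat_ = np.array([[0.5, 0.5],
                                [0.5, 0.5]])
    # model.fit(X, lengths) would keep startprob_/transmat_ and learn the rest.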
___________ TestGMMHMMWithSphericalCovars.test_fit_sparse_data[log] ____________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffff794bdf70>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
...state=RandomState(MT19937) at 0xFFFF78E88F40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 3999.84865619, 3069.23303621],
[ 4002.43732491, 3067.60756989],
[34695.58244203, 37696.508278... [ 4000.44796333, 3068.5295739 ],
[ 3998.48777025, 3067.967097 ],
[ 6450.19377286, 7478.35563 ]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
____________ TestGMMHMMWithSphericalCovars.test_criterion[scaling] _____________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffff794be450>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.]]),
covars_weight=a...2,
random_state=RandomState(MT19937) at 0xFFFF78EA6B40,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 40.65624455, 29.16802829],
[242.52436414, 193.40204501],
[347.23482679, 377.48914412],
...,
[ 38.12816493, 29.67601719],
[241.05090945, 192.22538034],
[240.19846475, 193.26742897]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
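test_criterion above fits a GMMHMM for each candidate number of states and collects AIC/BIC values. As a hedged usage sketch of that model-selection pattern (placeholder data; assumes hmmlearn's aic()/bic() model methods, which the test appears to rely on):

    import numpy as np
    from hmmlearn.hmm import GMMHMM

    X = np.random.rand(500, 2)          # placeholder data for illustration
    ns = [2, 3, 4, 5]
    aic, bic = [], []
    for n in ns:
        h = GMMHMM(n, n_mix=2, covariance_type="spherical", random_state=0)
        h.fit(X)
        aic.append(h.aic(X))
        bic.append(h.bic(X))
    best_n = ns[int(np.argmin(bic))]    # lower is better for both criteria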
______________ TestGMMHMMWithSphericalCovars.test_criterion[log] _______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithSphericalCovars object at 0xffff794be5d0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.]]),
covars_weight=a...2,
random_state=RandomState(MT19937) at 0xFFFF78EA7640,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 40.65624455, 29.16802829],
[242.52436414, 193.40204501],
[347.23482679, 377.48914412],
...,
[ 38.12816493, 29.67601719],
[241.05090945, 192.22538034],
[240.19846475, 193.26742897]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______ TestGMMHMMWithDiagCovars.test_score_samples_and_decode[scaling] ________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffff794bef00>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
...state=RandomState(MT19937) at 0xFFFF78E89240,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 9.54552606, 7.2016523 ],
[28.42609795, 28.94775636],
[ 3.62062358, 2.11526678],
...,
[ 4.11095304, -1.71284803],
[ 6.91178766, 8.51698046],
[ 7.22860929, 6.57244198]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestGMMHMMWithDiagCovars.test_score_samples_and_decode[log] __________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffff794bf0b0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
...state=RandomState(MT19937) at 0xFFFF78F8A340,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 9.54552606, 7.2016523 ],
[28.42609795, 28.94775636],
[ 3.62062358, 2.11526678],
...,
[ 4.11095304, -1.71284803],
[ 6.91178766, 8.51698046],
[ 7.22860929, 6.57244198]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________________ TestGMMHMMWithDiagCovars.test_fit[scaling] __________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffff794bf2f0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
...state=RandomState(MT19937) at 0xFFFF78F89B40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 9.54552606, 7.2016523 ],
[28.42609795, 28.94775636],
[ 3.62062358, 2.11526678],
...,
[ 4.11095304, -1.71284803],
[ 6.91178766, 8.51698046],
[ 7.22860929, 6.57244198]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestGMMHMMWithDiagCovars.test_fit[log] ____________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffff794bf4d0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
...state=RandomState(MT19937) at 0xFFFF78EA6E40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 9.54552606, 7.2016523 ],
[28.42609795, 28.94775636],
[ 3.62062358, 2.11526678],
...,
[ 4.11095304, -1.71284803],
[ 6.91178766, 8.51698046],
[ 7.22860929, 6.57244198]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________ TestGMMHMMWithDiagCovars.test_fit_sparse_data[scaling] ____________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffff794bf740>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
...state=RandomState(MT19937) at 0xFFFF78EA5B40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 6452.82751127, 7476.17731294],
[29346.43680528, 29439.90571357],
[ 4000.36493519, 3066.929754... [ 4000.85526465, 3063.10163931],
[ 6450.19377286, 7477.49264111],
[ 6450.51059449, 7475.54810263]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
______________ TestGMMHMMWithDiagCovars.test_fit_sparse_data[log] ______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffff794bf8c0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
...state=RandomState(MT19937) at 0xFFFF78F8A540,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 6452.82751127, 7476.17731294],
[29346.43680528, 29439.90571357],
[ 4000.36493519, 3066.929754... [ 4000.85526465, 3063.10163931],
[ 6450.19377286, 7477.49264111],
[ 6450.51059449, 7475.54810263]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
_______________ TestGMMHMMWithDiagCovars.test_criterion[scaling] _______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffff7946a780>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]]]),
...2,
random_state=RandomState(MT19937) at 0xFFFF78EA6F40,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 38.0423023 , 32.05291123],
[241.79570647, 194.37999672],
[294.33562038, 295.51745038],
...,
[ 61.05939775, 73.76171462],
[241.14804197, 192.17496613],
[241.72161057, 190.89927936]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestGMMHMMWithDiagCovars.test_criterion[log] _________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithDiagCovars object at 0xffff794b1940>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]]]),
...2,
random_state=RandomState(MT19937) at 0xFFFF78DE4140,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 38.0423023 , 32.05291123],
[241.79570647, 194.37999672],
[294.33562038, 295.51745038],
...,
[ 61.05939775, 73.76171462],
[241.14804197, 192.17496613],
[241.72161057, 190.89927936]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______ TestGMMHMMWithTiedCovars.test_score_samples_and_decode[scaling] ________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffff794b0fe0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....state=RandomState(MT19937) at 0xFFFF78DE4B40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 4.82987073, 10.88003985],
[30.24211141, 29.22255983],
[ 4.42110145, 2.15841708],
...,
[ 6.15358777, -1.47582217],
[ 6.23813069, 7.99872158],
[ 6.0189001 , 8.3217492 ]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestGMMHMMWithTiedCovars.test_score_samples_and_decode[log] __________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffff794bfa70>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....state=RandomState(MT19937) at 0xFFFF78EA7D40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 4.82987073, 10.88003985],
[30.24211141, 29.22255983],
[ 4.42110145, 2.15841708],
...,
[ 6.15358777, -1.47582217],
[ 6.23813069, 7.99872158],
[ 6.0189001 , 8.3217492 ]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________________ TestGMMHMMWithTiedCovars.test_fit[scaling] __________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffff794bedb0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....state=RandomState(MT19937) at 0xFFFF78EA6640,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 4.82987073, 10.88003985],
[30.24211141, 29.22255983],
[ 4.42110145, 2.15841708],
...,
[ 6.15358777, -1.47582217],
[ 6.23813069, 7.99872158],
[ 6.0189001 , 8.3217492 ]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestGMMHMMWithTiedCovars.test_fit[log] ____________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffff794b1010>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....state=RandomState(MT19937) at 0xFFFF78E8BA40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 4.82987073, 10.88003985],
[30.24211141, 29.22255983],
[ 4.42110145, 2.15841708],
...,
[ 6.15358777, -1.47582217],
[ 6.23813069, 7.99872158],
[ 6.0189001 , 8.3217492 ]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________ TestGMMHMMWithTiedCovars.test_fit_sparse_data[scaling] ____________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffff794b2ed0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....state=RandomState(MT19937) at 0xFFFF78F8A340,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 6448.11185593, 7479.8557005 ],
[29348.25281874, 29440.18051704],
[ 4001.16541306, 3066.972904... [ 4002.89789938, 3063.33866516],
[ 6449.52011589, 7476.97438223],
[ 6449.3008853 , 7477.29740985]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
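Note on the warnings above: each one corresponds to a letter in init_params, and fit() re-initializes exactly the parameters whose letters it contains. A minimal sketch of keeping hand-assigned start and transition probabilities through initialization by leaving 's' and 't' out of init_params; the model sizes and values below are illustrative and not taken from the failing test.

    # Sketch: omit the letters of manually set parameters from init_params so
    # fit() does not overwrite those attributes during initialization.
    import numpy as np
    from hmmlearn.hmm import GMMHMM

    h = GMMHMM(n_components=2, n_mix=1, covariance_type="diag",
               init_params="mcw",  # only means, covars and weights get re-initialized
               n_iter=5, random_state=0)
    h.startprob_ = np.array([0.6, 0.4])       # kept at initialization: 's' not in init_params
    h.transmat_ = np.array([[0.7, 0.3],
                            [0.4, 0.6]])      # kept at initialization: 't' not in init_params

    X = np.random.RandomState(0).normal(size=(200, 2))
    h.fit(X)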
______________ TestGMMHMMWithTiedCovars.test_fit_sparse_data[log] ______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffff794b1250>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....state=RandomState(MT19937) at 0xFFFF78DE4640,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 6448.11185593, 7479.8557005 ],
[29348.25281874, 29440.18051704],
[ 4001.16541306, 3066.972904... [ 4002.89789938, 3063.33866516],
[ 6449.52011589, 7476.97438223],
[ 6449.3008853 , 7477.29740985]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
_______________ TestGMMHMMWithTiedCovars.test_criterion[scaling] _______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffff794b1850>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....2,
random_state=RandomState(MT19937) at 0xFFFF78EA5D40,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 39.3472143 , 31.96516627],
[239.4457031 , 193.9191015 ],
[292.652245 , 294.71067724],
...,
[ 66.25970996, 70.96753017],
[242.16534204, 192.32929203],
[243.78173717, 191.48640575]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestGMMHMMWithTiedCovars.test_criterion[log] _________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithTiedCovars object at 0xffff794b1a30>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0....2,
random_state=RandomState(MT19937) at 0xFFFF78EA7140,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 39.3472143 , 31.96516627],
[239.4457031 , 193.9191015 ],
[292.652245 , 294.71067724],
...,
[ 66.25970996, 70.96753017],
[242.16534204, 192.32929203],
[243.78173717, 191.48640575]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______ TestGMMHMMWithFullCovars.test_score_samples_and_decode[scaling] ________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffff794b2ae0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...state=RandomState(MT19937) at 0xFFFF78DE5440,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 5.79374863, 8.96492185],
[26.41560176, 18.9509068 ],
[15.10318903, 13.16898577],
...,
[ 8.84018693, 2.41627666],
[32.50086843, 27.24027875],
[ 4.04144414, 2.99516636]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestGMMHMMWithFullCovars.test_score_samples_and_decode[log] __________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffff794b2c60>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples_and_decode(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
X, states = h.sample(n_samples)
> _ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_gmm_hmm_new.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...state=RandomState(MT19937) at 0xFFFF78DE5D40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 5.79374863, 8.96492185],
[26.41560176, 18.9509068 ],
[15.10318903, 13.16898577],
...,
[ 8.84018693, 2.41627666],
[32.50086843, 27.24027875],
[ 4.04144414, 2.99516636]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________________ TestGMMHMMWithFullCovars.test_fit[scaling] __________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffff794b2ea0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...state=RandomState(MT19937) at 0xFFFF790B9B40,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 5.79374863, 8.96492185],
[26.41560176, 18.9509068 ],
[15.10318903, 13.16898577],
...,
[ 8.84018693, 2.41627666],
[32.50086843, 27.24027875],
[ 4.04144414, 2.99516636]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestGMMHMMWithFullCovars.test_fit[log] ____________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffff794b3080>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation):
n_iter = 5
n_samples = 1000
lengths = None
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(n_samples)
# Mess up the parameters and see if we can re-learn them.
covs0, means0, priors0, trans0, weights0 = prep_params(
self.n_components, self.n_mix, self.n_features,
self.covariance_type, self.low, self.high,
np.random.RandomState(15)
)
h.covars_ = covs0 * 100
h.means_ = means0
h.startprob_ = priors0
h.transmat_ = trans0
h.weights_ = weights0
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_gmm_hmm_new.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...state=RandomState(MT19937) at 0xFFFF78EA5940,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 5.79374863, 8.96492185],
[26.41560176, 18.9509068 ],
[15.10318903, 13.16898577],
...,
[ 8.84018693, 2.41627666],
[32.50086843, 27.24027875],
[ 4.04144414, 2.99516636]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________ TestGMMHMMWithFullCovars.test_fit_sparse_data[scaling] ____________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffff794b32f0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...state=RandomState(MT19937) at 0xFFFF78EA7340,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 6449.07573383, 7477.94058249],
[24146.71996085, 19276.07031622],
[17479.42387006, 13924.043632... [ 6452.12217213, 7471.39193731],
[29350.51157576, 29438.19823596],
[ 4000.78575575, 3067.80965369]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
______________ TestGMMHMMWithFullCovars.test_fit_sparse_data[log] ______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffff794b3470>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_sparse_data(self, implementation):
n_samples = 1000
h = self.new_hmm(implementation)
h.means_ *= 1000 # this will put gaussians very far apart
X, _states = h.sample(n_samples)
# this should not raise
# "ValueError: array must not contain infs or NaNs"
h._init(X, [1000])
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...state=RandomState(MT19937) at 0xFFFF78EA5340,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[ 6449.07573383, 7477.94058249],
[24146.71996085, 19276.07031622],
[17479.42387006, 13924.043632... [ 6452.12217213, 7471.39193731],
[29350.51157576, 29438.19823596],
[ 4000.78575575, 3067.80965369]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
WARNING hmmlearn.base:base.py:514 Even though the 'startprob_' attribute is set, it will be overwritten during initialization because 'init_params' contains 's'
WARNING hmmlearn.base:base.py:514 Even though the 'transmat_' attribute is set, it will be overwritten during initialization because 'init_params' contains 't'
WARNING hmmlearn.base:base.py:514 Even though the 'weights_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'w'
WARNING hmmlearn.base:base.py:514 Even though the 'means_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'm'
WARNING hmmlearn.base:base.py:514 Even though the 'covars_' attribute is set, it will be overwritten during initialization because 'init_params' contains 'c'
_______________ TestGMMHMMWithFullCovars.test_criterion[scaling] _______________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffff7949af90>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...2,
random_state=RandomState(MT19937) at 0xFFFF78EA4D40,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 38.38394272, 31.47161592],
[173.50537726, 139.44913251],
[345.97120338, 377.84504473],
...,
[ 66.25970996, 70.96753017],
[175.32456372, 138.9877489 ],
[243.56689381, 192.53130439]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestGMMHMMWithFullCovars.test_criterion[log] _________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMMWithFullCovars object at 0xffff794b31a0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(2013)
m1 = self.new_hmm(implementation)
# Spread the means out to make this easier
m1.means_ *= 10
X, _ = m1.sample(4000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4, 5]
for n in ns:
h = GMMHMM(n, n_mix=2, covariance_type=self.covariance_type,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_gmm_hmm_new.py:194:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
...2,
random_state=RandomState(MT19937) at 0xFFFF78EA7940,
weights_prior=array([[1., 1.],
[1., 1.]]))
X = array([[ 38.38394272, 31.47161592],
[173.50537726, 139.44913251],
[345.97120338, 377.84504473],
...,
[ 66.25970996, 70.96753017],
[175.32456372, 138.9877489 ],
[243.56689381, 192.53130439]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________________ TestGMMHMM_KmeansInit.test_kmeans[scaling] __________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMM_KmeansInit object at 0xffff794b2210>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_kmeans(self, implementation):
# Generate two isolated clusters.
# The second cluster has fewer points than n_mix.
np.random.seed(0)
data1 = np.random.uniform(low=0, high=1, size=(100, 2))
data2 = np.random.uniform(low=5, high=6, size=(5, 2))
data = np.r_[data1, data2]
model = GMMHMM(n_components=2, n_mix=10, n_iter=5,
implementation=implementation)
> model.fit(data) # _init() should not fail here
hmmlearn/tests/test_gmm_hmm_new.py:232:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5],
[-1.5, -1.5],
[-1.5, -1.5],
[-... weights_prior=array([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]]))
X = array([[5.48813504e-01, 7.15189366e-01],
[6.02763376e-01, 5.44883183e-01],
[4.23654799e-01, 6.45894113e-... [5.02467873e+00, 5.06724963e+00],
[5.67939277e+00, 5.45369684e+00],
[5.53657921e+00, 5.89667129e+00]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestGMMHMM_KmeansInit.test_kmeans[log] ____________________
self = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMM_KmeansInit object at 0xffff794b13a0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_kmeans(self, implementation):
# Generate two isolated clusters.
# The second cluster has fewer points than n_mix.
np.random.seed(0)
data1 = np.random.uniform(low=0, high=1, size=(100, 2))
data2 = np.random.uniform(low=5, high=6, size=(5, 2))
data = np.r_[data1, data2]
model = GMMHMM(n_components=2, n_mix=10, n_iter=5,
implementation=implementation)
> model.fit(data) # _init() should not fail here
hmmlearn/tests/test_gmm_hmm_new.py:232:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5],
[-1.5, -1.5],
[-1.5, -1.5],
[-... weights_prior=array([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]]))
X = array([[5.48813504e-01, 7.15189366e-01],
[6.02763376e-01, 5.44883183e-01],
[4.23654799e-01, 6.45894113e-... [5.02467873e+00, 5.06724963e+00],
[5.67939277e+00, 5.45369684e+00],
[5.53657921e+00, 5.89667129e+00]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestGMMHMM_MultiSequence.test_chunked[diag] __________________
sellf = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMM_MultiSequence object at 0xffff794b3860>
covtype = 'diag', init_params = 'mcw'
@pytest.mark.parametrize("covtype",
["diag", "spherical", "tied", "full"])
def test_chunked(sellf, covtype, init_params='mcw'):
np.random.seed(0)
gmm = create_random_gmm(3, 2, covariance_type=covtype, prng=0)
gmm.covariances_ = gmm.covars_
data = gmm.sample(n_samples=1000)[0]
model1 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
model2 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
# don't use random parameters for testing
init = 1. / model1.n_components
for model in (model1, model2):
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
> model1.fit(data)
hmmlearn/tests/test_gmm_hmm_new.py:259:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covars_prior=array([[[-1.5, -1.5],
[-1.5, -1.5]],
[[-1.5, -1.5],
[-1.5, -1.5]],
... n_components=3, n_mix=2, random_state=1,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[-19.97769034, -16.75056455],
[-19.88212945, -16.97913043],
[-19.93125386, -16.94276853],
...,
[-11.01150478, -1.11584774],
[-11.10973308, -1.07914205],
[-10.8998337 , -0.84707255]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______________ TestGMMHMM_MultiSequence.test_chunked[spherical] _______________
sellf = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMM_MultiSequence object at 0xffff794b3bf0>
covtype = 'spherical', init_params = 'mcw'
@pytest.mark.parametrize("covtype",
["diag", "spherical", "tied", "full"])
def test_chunked(sellf, covtype, init_params='mcw'):
np.random.seed(0)
gmm = create_random_gmm(3, 2, covariance_type=covtype, prng=0)
gmm.covariances_ = gmm.covars_
data = gmm.sample(n_samples=1000)[0]
model1 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
model2 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
# don't use random parameters for testing
init = 1. / model1.n_components
for model in (model1, model2):
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
> model1.fit(data)
hmmlearn/tests/test_gmm_hmm_new.py:259:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='spherical',
covars_prior=array([[-2., -2.],
[-2., -2.],
[-2., -2.]]),
... n_components=3, n_mix=2, random_state=1,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[-19.80390185, -17.07835084],
[-19.60579587, -16.83260239],
[-19.92498908, -16.91030194],
...,
[-11.17392582, -1.26966434],
[-11.14220209, -1.03192961],
[-11.14814372, -0.99298261]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestGMMHMM_MultiSequence.test_chunked[tied] __________________
sellf = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMM_MultiSequence object at 0xffff794b3d10>
covtype = 'tied', init_params = 'mcw'
@pytest.mark.parametrize("covtype",
["diag", "spherical", "tied", "full"])
def test_chunked(sellf, covtype, init_params='mcw'):
np.random.seed(0)
gmm = create_random_gmm(3, 2, covariance_type=covtype, prng=0)
gmm.covariances_ = gmm.covars_
data = gmm.sample(n_samples=1000)[0]
model1 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
model2 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
# don't use random parameters for testing
init = 1. / model1.n_components
for model in (model1, model2):
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
> model1.fit(data)
hmmlearn/tests/test_gmm_hmm_new.py:259:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='tied',
covars_prior=array([[[0., 0.],
[0., 0.]],
[[0., 0.],
[0.... n_components=3, n_mix=2, random_state=1,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[-20.22761614, -15.84567719],
[-21.23619726, -16.89659692],
[-20.71982474, -16.73140459],
...,
[-10.87180439, -1.55878592],
[ -9.74956046, -1.38825752],
[-12.13924424, -0.25692342]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestGMMHMM_MultiSequence.test_chunked[full] __________________
sellf = <hmmlearn.tests.test_gmm_hmm_new.TestGMMHMM_MultiSequence object at 0xffff794b3e00>
covtype = 'full', init_params = 'mcw'
@pytest.mark.parametrize("covtype",
["diag", "spherical", "tied", "full"])
def test_chunked(sellf, covtype, init_params='mcw'):
np.random.seed(0)
gmm = create_random_gmm(3, 2, covariance_type=covtype, prng=0)
gmm.covariances_ = gmm.covars_
data = gmm.sample(n_samples=1000)[0]
model1 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
model2 = GMMHMM(n_components=3, n_mix=2, covariance_type=covtype,
random_state=1, init_params=init_params)
# don't use random parameters for testing
init = 1. / model1.n_components
for model in (model1, model2):
model.startprob_ = np.full(model.n_components, init)
model.transmat_ = \
np.full((model.n_components, model.n_components), init)
> model1.fit(data)
hmmlearn/tests/test_gmm_hmm_new.py:259:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GMMHMM(covariance_type='full',
covars_prior=array([[[[0., 0.],
[0., 0.]],
[[0., 0.],
... n_components=3, n_mix=2, random_state=1,
weights_prior=array([[1., 1.],
[1., 1.],
[1., 1.]]))
X = array([[-20.51255292, -17.67431134],
[-15.84831228, -16.50504373],
[-21.40806672, -17.58054428],
...,
[-12.05683236, -0.58197627],
[-11.42658201, -1.42127957],
[-12.15481108, -0.76401566]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________ TestMultinomialHMM.test_score_samples[scaling] ________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff78ff1130>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples(self, implementation):
X = np.array([
[1, 1, 3, 0],
[3, 1, 1, 0],
[3, 0, 2, 0],
[2, 2, 0, 1],
[2, 2, 0, 1],
[0, 1, 1, 3],
[1, 0, 3, 1],
[2, 0, 1, 2],
[0, 2, 1, 2],
[1, 0, 1, 3],
])
n_samples = X.shape[0]
h = self.new_hmm(implementation)
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_multinomial_hmm.py:53:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(implementation='scaling', n_components=2, n_trials=5)
X = array([[1, 1, 3, 0],
[3, 1, 1, 0],
[3, 0, 2, 0],
[2, 2, 0, 1],
[2, 2, 0, 1],
[0, 1, 1, 3],
[1, 0, 3, 1],
[2, 0, 1, 2],
[0, 2, 1, 2],
[1, 0, 1, 3]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
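Note on the warning above: under the new MultinomialHMM semantics, each row of X is a vector of per-category counts summing to n_trials, exactly like the 4-column rows summing to 5 in this test. A minimal sketch of that input format, reusing the counts shown above; n_iter and random_state are illustrative.

    # Sketch: new-style MultinomialHMM input, one row of category counts per
    # sample, every row summing to n_trials (here 5 trials over 4 categories).
    import numpy as np
    from hmmlearn.hmm import MultinomialHMM

    X = np.array([
        [1, 1, 3, 0],
        [3, 1, 1, 0],
        [3, 0, 2, 0],
        [2, 2, 0, 1],
        [2, 2, 0, 1],
        [0, 1, 1, 3],
        [1, 0, 3, 1],
        [2, 0, 1, 2],
        [0, 2, 1, 2],
        [1, 0, 1, 3],
    ])
    assert (X.sum(axis=1) == 5).all()

    h = MultinomialHMM(n_components=2, n_trials=5, n_iter=10, random_state=0)
    h.fit(X)
    print(h.score(X))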
__________________ TestMultinomialHMM.test_score_samples[log] __________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff78ff0140>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples(self, implementation):
X = np.array([
[1, 1, 3, 0],
[3, 1, 1, 0],
[3, 0, 2, 0],
[2, 2, 0, 1],
[2, 2, 0, 1],
[0, 1, 1, 3],
[1, 0, 3, 1],
[2, 0, 1, 2],
[0, 2, 1, 2],
[1, 0, 1, 3],
])
n_samples = X.shape[0]
h = self.new_hmm(implementation)
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_multinomial_hmm.py:53:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(n_components=2, n_trials=5)
X = array([[1, 1, 3, 0],
[3, 1, 1, 0],
[3, 0, 2, 0],
[2, 2, 0, 1],
[2, 2, 0, 1],
[0, 1, 1, 3],
[1, 0, 3, 1],
[2, 0, 1, 2],
[0, 2, 1, 2],
[1, 0, 1, 3]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
_____________________ TestMultinomialHMM.test_fit[scaling] _____________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff78ff0da0>
implementation = 'scaling', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='ste', n_iter=5):
h = self.new_hmm(implementation)
h.params = params
lengths = np.array([10] * 10)
X, _state_sequence = h.sample(lengths.sum())
# Mess up the parameters and see if we can re-learn them.
h.startprob_ = normalized(np.random.random(self.n_components))
h.transmat_ = normalized(
np.random.random((self.n_components, self.n_components)),
axis=1)
h.emissionprob_ = normalized(
np.random.random((self.n_components, self.n_features)),
axis=1)
# Also mess up trial counts.
h.n_trials = None
X[::2] *= 2
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_multinomial_hmm.py:92:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(implementation='scaling', init_params='', n_components=2,
n_iter=1,
n_tri..., 10, 5, 10, 5, 10, 5, 10, 5, 10, 5, 10, 5]),
random_state=RandomState(MT19937) at 0xFFFF7F6E3640)
X = array([[4, 6, 0, 0],
[3, 1, 0, 1],
[2, 2, 6, 0],
[0, 2, 3, 0],
[0, 0, 4, 6],
[3, 0, 0, 2],
[2, 0, 4, 4],
[1, 0, 2, 2],
[2, 4, 0, 4],
[3, 2, 0, 0]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
_______________________ TestMultinomialHMM.test_fit[log] _______________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff78ff1bb0>
implementation = 'log', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='ste', n_iter=5):
h = self.new_hmm(implementation)
h.params = params
lengths = np.array([10] * 10)
X, _state_sequence = h.sample(lengths.sum())
# Mess up the parameters and see if we can re-learn them.
h.startprob_ = normalized(np.random.random(self.n_components))
h.transmat_ = normalized(
np.random.random((self.n_components, self.n_components)),
axis=1)
h.emissionprob_ = normalized(
np.random.random((self.n_components, self.n_features)),
axis=1)
# Also mess up trial counts.
h.n_trials = None
X[::2] *= 2
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_multinomial_hmm.py:92:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(init_params='', n_components=2, n_iter=1,
n_trials=array([10, 5, 10, 5, 10, 5, 10, 5..., 10, 5, 10, 5, 10, 5, 10, 5, 10, 5, 10, 5]),
random_state=RandomState(MT19937) at 0xFFFF7F6E3640)
X = array([[0, 0, 6, 4],
[4, 0, 1, 0],
[8, 2, 0, 0],
[2, 2, 0, 1],
[8, 2, 0, 0],
[1, 2, 1, 1],
[2, 4, 0, 4],
[2, 2, 0, 1],
[6, 2, 2, 0],
[0, 1, 2, 2]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
______________ TestMultinomialHMM.test_fit_emissionprob[scaling] _______________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff78ff1dc0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_emissionprob(self, implementation):
> self.test_fit(implementation, 'e')
hmmlearn/tests/test_multinomial_hmm.py:96:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_multinomial_hmm.py:92: in test_fit
assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(implementation='scaling', init_params='', n_components=2,
n_iter=1,
n_tri..., 5, 10, 5, 10, 5, 10, 5, 10, 5]),
params='e', random_state=RandomState(MT19937) at 0xFFFF7F6E3640)
X = array([[0, 6, 4, 0],
[0, 1, 2, 2],
[0, 0, 6, 4],
[0, 2, 1, 2],
[6, 0, 2, 2],
[1, 3, 0, 1],
[8, 2, 0, 0],
[3, 2, 0, 0],
[6, 4, 0, 0],
[5, 0, 0, 0]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
________________ TestMultinomialHMM.test_fit_emissionprob[log] _________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff78ff1f40>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_emissionprob(self, implementation):
> self.test_fit(implementation, 'e')
hmmlearn/tests/test_multinomial_hmm.py:96:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_multinomial_hmm.py:92: in test_fit
assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(init_params='', n_components=2, n_iter=1,
n_trials=array([10, 5, 10, 5, 10, 5, 10, 5..., 5, 10, 5, 10, 5, 10, 5, 10, 5]),
params='e', random_state=RandomState(MT19937) at 0xFFFF7F6E3640)
X = array([[6, 4, 0, 0],
[4, 1, 0, 0],
[4, 2, 2, 2],
[1, 2, 1, 1],
[2, 0, 4, 4],
[0, 0, 5, 0],
[6, 2, 0, 2],
[3, 2, 0, 0],
[0, 0, 6, 4],
[0, 0, 1, 4]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
________________ TestMultinomialHMM.test_fit_with_init[scaling] ________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff78ff20f0>
implementation = 'scaling', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_init(self, implementation, params='ste', n_iter=5):
lengths = [10] * 10
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(sum(lengths))
# use init_function to initialize paramerters
h = hmm.MultinomialHMM(
n_components=self.n_components, n_trials=self.n_trials,
params=params, init_params=params)
h._init(X, lengths)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_multinomial_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(init_params='', n_components=2, n_iter=1, n_trials=5,
random_state=RandomState(MT19937) at 0xFFFF7F6E3640)
X = array([[0, 0, 3, 2],
[1, 2, 1, 1],
[3, 1, 1, 0],
[4, 1, 0, 0],
[1, 0, 2, 2],
[0, 0, 3, 2],
[1, 1, 3, 0],
[0, 1, 1, 3],
[3, 0, 1, 1],
[0, 0, 3, 2]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
__________________ TestMultinomialHMM.test_fit_with_init[log] __________________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff78ff2270>
implementation = 'log', params = 'ste', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_init(self, implementation, params='ste', n_iter=5):
lengths = [10] * 10
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(sum(lengths))
# use init_function to initialize paramerters
h = hmm.MultinomialHMM(
n_components=self.n_components, n_trials=self.n_trials,
params=params, init_params=params)
h._init(X, lengths)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_multinomial_hmm.py:110:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(init_params='', n_components=2, n_iter=1, n_trials=5,
random_state=RandomState(MT19937) at 0xFFFF7F6E3640)
X = array([[4, 0, 0, 1],
[0, 1, 2, 2],
[0, 1, 0, 4],
[3, 1, 1, 0],
[0, 0, 1, 4],
[0, 1, 2, 2],
[1, 0, 2, 2],
[0, 0, 4, 1],
[0, 1, 3, 1],
[3, 2, 0, 0]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
________ TestMultinomialHMM.test_compare_with_categorical_hmm[scaling] _________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff78ff27b0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_compare_with_categorical_hmm(self, implementation):
n_components = 2 # ['Rainy', 'Sunny']
n_features = 3 # ['walk', 'shop', 'clean']
n_trials = 1
startprob = np.array([0.6, 0.4])
transmat = np.array([[0.7, 0.3], [0.4, 0.6]])
emissionprob = np.array([[0.1, 0.4, 0.5],
[0.6, 0.3, 0.1]])
h1 = hmm.MultinomialHMM(
n_components=n_components, n_trials=n_trials,
implementation=implementation)
h2 = hmm.CategoricalHMM(
n_components=n_components, implementation=implementation)
h1.startprob_ = startprob
h2.startprob_ = startprob
h1.transmat_ = transmat
h2.transmat_ = transmat
h1.emissionprob_ = emissionprob
h2.emissionprob_ = emissionprob
X1 = np.array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
X2 = [[0], [1], [2]] # different input format for CategoricalHMM
> log_prob1, state_sequence1 = h1.decode(X1, algorithm="viterbi")
hmmlearn/tests/test_multinomial_hmm.py:161:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(implementation='scaling', n_components=2, n_trials=1)
X = array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
__________ TestMultinomialHMM.test_compare_with_categorical_hmm[log] ___________
self = <hmmlearn.tests.test_multinomial_hmm.TestMultinomialHMM object at 0xffff78ff2930>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_compare_with_categorical_hmm(self, implementation):
n_components = 2 # ['Rainy', 'Sunny']
n_features = 3 # ['walk', 'shop', 'clean']
n_trials = 1
startprob = np.array([0.6, 0.4])
transmat = np.array([[0.7, 0.3], [0.4, 0.6]])
emissionprob = np.array([[0.1, 0.4, 0.5],
[0.6, 0.3, 0.1]])
h1 = hmm.MultinomialHMM(
n_components=n_components, n_trials=n_trials,
implementation=implementation)
h2 = hmm.CategoricalHMM(
n_components=n_components, implementation=implementation)
h1.startprob_ = startprob
h2.startprob_ = startprob
h1.transmat_ = transmat
h2.transmat_ = transmat
h1.emissionprob_ = emissionprob
h2.emissionprob_ = emissionprob
X1 = np.array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
X2 = [[0], [1], [2]] # different input format for CategoricalHMM
> log_prob1, state_sequence1 = h1.decode(X1, algorithm="viterbi")
hmmlearn/tests/test_multinomial_hmm.py:161:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:340: in decode
sub_log_prob, sub_state_sequence = decoder(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = MultinomialHMM(n_components=2, n_trials=1)
X = array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
def _decode_viterbi(self, X):
log_frameprob = self._compute_log_likelihood(X)
> return _hmmc.viterbi(self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:286: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
------------------------------ Captured log call -------------------------------
WARNING hmmlearn.hmm:hmm.py:883 MultinomialHMM has undergone major changes. The previous version was implementing a CategoricalHMM (a special case of MultinomialHMM). This new implementation follows the standard definition for a Multinomial distribution (e.g. as in https://en.wikipedia.org/wiki/Multinomial_distribution). See these issues for details:
https://github.com/hmmlearn/hmmlearn/issues/335
https://github.com/hmmlearn/hmmlearn/issues/340
__________________ TestPoissonHMM.test_score_samples[scaling] __________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff78ff1c70>
implementation = 'scaling', n_samples = 1000
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples(self, implementation, n_samples=1000):
h = self.new_hmm(implementation)
X, state_sequence = h.sample(n_samples)
assert X.ndim == 2
assert len(X) == len(state_sequence) == n_samples
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_poisson_hmm.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(implementation='scaling', n_components=2, random_state=0)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
...,
[1, 5, 0],
[1, 6, 0],
[2, 3, 0]])
lengths = None
def _score_scaling(self, X, lengths=None, *, compute_posteriors):
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
frameprob = self._compute_likelihood(sub_X)
> log_probij, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:272: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestPoissonHMM.test_score_samples[log] ____________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff78ff2fc0>
implementation = 'log', n_samples = 1000
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_score_samples(self, implementation, n_samples=1000):
h = self.new_hmm(implementation)
X, state_sequence = h.sample(n_samples)
assert X.ndim == 2
assert len(X) == len(state_sequence) == n_samples
> ll, posteriors = h.score_samples(X)
hmmlearn/tests/test_poisson_hmm.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:201: in score_samples
return self._score(X, lengths, compute_posteriors=True)
hmmlearn/base.py:244: in _score
return impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(n_components=2, random_state=0)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
...,
[1, 5, 0],
[1, 6, 0],
[2, 3, 0]])
lengths = None
def _score_log(self, X, lengths=None, *, compute_posteriors):
"""
Compute the log probability under the model, as well as posteriors if
*compute_posteriors* is True (otherwise, an empty array is returned
for the latter).
"""
log_prob = 0
sub_posteriors = [np.empty((0, self.n_components))]
for sub_X in _utils.split_X_lengths(X, lengths):
log_frameprob = self._compute_log_likelihood(sub_X)
> log_probij, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:257: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______________________ TestPoissonHMM.test_fit[scaling] _______________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff78ff3170>
implementation = 'scaling', params = 'stl', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stl', n_iter=5):
h = self.new_hmm(implementation)
h.params = params
lengths = np.array([10] * 10)
X, _state_sequence = h.sample(lengths.sum())
# Mess up the parameters and see if we can re-learn them.
np.random.seed(0)
h.startprob_ = normalized(np.random.random(self.n_components))
h.transmat_ = normalized(
np.random.random((self.n_components, self.n_components)),
axis=1)
h.lambdas_ = np.random.gamma(
shape=2, size=(self.n_components, self.n_features))
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_poisson_hmm.py:62:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(implementation='scaling', init_params='', n_components=2, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF78EA4E40)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
[3, 0, 4],
[2, 0, 4],
[4, 3, 0],
[0, 5, 1],
[0, 4, 0],
[4, 2, 7],
[4, 4, 0]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________________ TestPoissonHMM.test_fit[log] _________________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff78ff32f0>
implementation = 'log', params = 'stl', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit(self, implementation, params='stl', n_iter=5):
h = self.new_hmm(implementation)
h.params = params
lengths = np.array([10] * 10)
X, _state_sequence = h.sample(lengths.sum())
# Mess up the parameters and see if we can re-learn them.
np.random.seed(0)
h.startprob_ = normalized(np.random.random(self.n_components))
h.transmat_ = normalized(
np.random.random((self.n_components, self.n_components)),
axis=1)
h.lambdas_ = np.random.gamma(
shape=2, size=(self.n_components, self.n_features))
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_poisson_hmm.py:62:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(init_params='', n_components=2, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF78EA5940)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
[3, 0, 4],
[2, 0, 4],
[4, 3, 0],
[0, 5, 1],
[0, 4, 0],
[4, 2, 7],
[4, 4, 0]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___________________ TestPoissonHMM.test_fit_lambdas[scaling] ___________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff78ff3500>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_lambdas(self, implementation):
> self.test_fit(implementation, 'l')
hmmlearn/tests/test_poisson_hmm.py:66:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_poisson_hmm.py:62: in test_fit
assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(implementation='scaling', init_params='', n_components=2, n_iter=1,
params='l', random_state=RandomState(MT19937) at 0xFFFF78E89C40)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
[3, 0, 4],
[2, 0, 4],
[4, 3, 0],
[0, 5, 1],
[0, 4, 0],
[4, 2, 7],
[4, 4, 0]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_____________________ TestPoissonHMM.test_fit_lambdas[log] _____________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff78ff3710>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_lambdas(self, implementation):
> self.test_fit(implementation, 'l')
hmmlearn/tests/test_poisson_hmm.py:66:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/test_poisson_hmm.py:62: in test_fit
assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(init_params='', n_components=2, n_iter=1, params='l',
random_state=RandomState(MT19937) at 0xFFFF78E8B640)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
[3, 0, 4],
[2, 0, 4],
[4, 3, 0],
[0, 5, 1],
[0, 4, 0],
[4, 2, 7],
[4, 4, 0]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________________ TestPoissonHMM.test_fit_with_init[scaling] __________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff78ff38c0>
implementation = 'scaling', params = 'stl', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_init(self, implementation, params='stl', n_iter=5):
lengths = [10] * 10
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(sum(lengths))
# use init_function to initialize paramerters
h = hmm.PoissonHMM(self.n_components, params=params,
init_params=params)
h._init(X, lengths)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_poisson_hmm.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(init_params='', n_components=2, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF7F6E3640)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
[3, 0, 4],
[2, 0, 4],
[4, 3, 0],
[0, 5, 1],
[0, 4, 0],
[4, 2, 7],
[4, 4, 0]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestPoissonHMM.test_fit_with_init[log] ____________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff78ff3a40>
implementation = 'log', params = 'stl', n_iter = 5
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_with_init(self, implementation, params='stl', n_iter=5):
lengths = [10] * 10
h = self.new_hmm(implementation)
X, _state_sequence = h.sample(sum(lengths))
# use init_function to initialize paramerters
h = hmm.PoissonHMM(self.n_components, params=params,
init_params=params)
h._init(X, lengths)
> assert_log_likelihood_increasing(h, X, lengths, n_iter)
hmmlearn/tests/test_poisson_hmm.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(init_params='', n_components=2, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF7F6E3640)
X = array([[5, 4, 3],
[7, 1, 6],
[6, 1, 4],
[3, 0, 4],
[2, 0, 4],
[4, 3, 0],
[0, 5, 1],
[0, 4, 0],
[4, 2, 7],
[4, 4, 0]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestPoissonHMM.test_criterion[scaling] ____________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff78ff3bf0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(412)
m1 = self.new_hmm(implementation)
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.PoissonHMM(n, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_poisson_hmm.py:93:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(implementation='scaling', n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFF78EA7B40)
X = array([[1, 5, 0],
[3, 5, 0],
[4, 1, 4],
...,
[1, 4, 0],
[3, 6, 0],
[5, 0, 4]])
def _fit_scaling(self, X):
frameprob = self._compute_likelihood(X)
> log_prob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_, self.transmat_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:855: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________________ TestPoissonHMM.test_criterion[log] ______________________
self = <hmmlearn.tests.test_poisson_hmm.TestPoissonHMM object at 0xffff78ff3dd0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_criterion(self, implementation):
random_state = check_random_state(412)
m1 = self.new_hmm(implementation)
X, _ = m1.sample(2000, random_state=random_state)
aic = []
bic = []
ns = [2, 3, 4]
for n in ns:
h = hmm.PoissonHMM(n, n_iter=500,
random_state=random_state, implementation=implementation)
> h.fit(X)
hmmlearn/tests/test_poisson_hmm.py:93:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = PoissonHMM(n_components=2, n_iter=500,
random_state=RandomState(MT19937) at 0xFFFF78EA5040)
X = array([[1, 5, 0],
[3, 5, 0],
[4, 1, 4],
...,
[1, 4, 0],
[3, 6, 0],
[5, 0, 4]])
def _fit_log(self, X):
log_frameprob = self._compute_log_likelihood(X)
> log_prob, fwdlattice = _hmmc.forward_log(
self.startprob_, self.transmat_, log_frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:864: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_____________ TestVariationalCategorical.test_init_priors[scaling] _____________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff7903c710>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_init_priors(self, implementation):
sequences, lengths = self.get_from_one_beal(7, 100, None)
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="",
implementation=implementation)
model.pi_prior_ = np.full((4,), .25)
model.pi_posterior_ = np.full((4,), 7/4)
model.transmat_prior_ = np.full((4, 4), .25)
model.transmat_posterior_ = np.full((4, 4), 7/4)
model.emissionprob_prior_ = np.full((4, 3), 1/3)
model.emissionprob_posterior_ = np.asarray([[.3, .4, .3],
[.8, .1, .1],
[.2, .2, .6],
[.2, .6, .2]])
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:73:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(implementation='scaling', init_params='',
n_components=4, n_features=3, n_iter=1,
random_state=1984)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______________ TestVariationalCategorical.test_init_priors[log] _______________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff7903c8c0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_init_priors(self, implementation):
sequences, lengths = self.get_from_one_beal(7, 100, None)
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="",
implementation=implementation)
model.pi_prior_ = np.full((4,), .25)
model.pi_posterior_ = np.full((4,), 7/4)
model.transmat_prior_ = np.full((4, 4), .25)
model.transmat_posterior_ = np.full((4, 4), 7/4)
model.emissionprob_prior_ = np.full((4, 3), 1/3)
model.emissionprob_posterior_ = np.asarray([[.3, .4, .3],
[.8, .1, .1],
[.2, .2, .6],
[.2, .6, .2]])
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:73:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(init_params='', n_components=4, n_features=3,
n_iter=1, random_state=1984)
X = array([[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1]... [1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_____________ TestVariationalCategorical.test_n_features[scaling] ______________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff7903d3a0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_n_features(self, implementation):
sequences, lengths = self.get_from_one_beal(7, 100, None)
# Learn n_Features
model = vhmm.VariationalCategoricalHMM(
4, implementation=implementation)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:82:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(implementation='scaling', init_params='',
n_components=4, n_features=3, n_iter=1)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______________ TestVariationalCategorical.test_n_features[log] ________________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff7903d520>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_n_features(self, implementation):
sequences, lengths = self.get_from_one_beal(7, 100, None)
# Learn n_Features
model = vhmm.VariationalCategoricalHMM(
4, implementation=implementation)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:82:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(init_params='', n_components=4, n_features=3,
n_iter=1)
X = array([[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1]... [1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________ TestVariationalCategorical.test_init_incorrect_priors[scaling] ________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff7903d700>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_init_incorrect_priors(self, implementation):
sequences, lengths = self.get_from_one_beal(7, 100, None)
# Test startprob shape
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="te",
implementation=implementation)
model.startprob_prior_ = np.full((3,), .25)
model.startprob_posterior_ = np.full((4,), 7/4)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="te",
implementation=implementation)
model.startprob_prior_ = np.full((4,), .25)
model.startprob_posterior_ = np.full((3,), 7/4)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test transmat shape
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.transmat_prior_ = np.full((3, 3), .25)
model.transmat_posterior_ = np.full((4, 4), .25)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.transmat_prior_ = np.full((4, 4), .25)
model.transmat_posterior_ = np.full((3, 3), 7/4)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test emission shape
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="st",
implementation=implementation)
model.emissionprob_prior_ = np.full((3, 3), 1/3)
model.emissionprob_posterior_ = np.asarray([[.3, .4, .3],
[.8, .1, .1],
[.2, .2, .6],
[.2, .6, .2]])
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test too many n_features
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.emissionprob_prior_ = np.full((4, 4), 7/4)
model.emissionprob_posterior_ = np.full((4, 4), .25)
model.n_features_ = 10
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Too small n_features
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.emissionprob_prior_ = np.full((4, 4), 7/4)
model.emissionprob_posterior_ = np.full((4, 4), .25)
model.n_features_ = 1
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test that setting the desired prior value works
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="ste",
implementation=implementation,
startprob_prior=1, transmat_prior=2, emissionprob_prior=3)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:191:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(emissionprob_prior=3, implementation='scaling',
init_params='', n_...,
n_iter=1, random_state=1984, startprob_prior=1,
transmat_prior=2)
X = array([[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1]... [1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________ TestVariationalCategorical.test_init_incorrect_priors[log] __________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff7903d880>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_init_incorrect_priors(self, implementation):
sequences, lengths = self.get_from_one_beal(7, 100, None)
# Test startprob shape
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="te",
implementation=implementation)
model.startprob_prior_ = np.full((3,), .25)
model.startprob_posterior_ = np.full((4,), 7/4)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="te",
implementation=implementation)
model.startprob_prior_ = np.full((4,), .25)
model.startprob_posterior_ = np.full((3,), 7/4)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test transmat shape
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.transmat_prior_ = np.full((3, 3), .25)
model.transmat_posterior_ = np.full((4, 4), .25)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.transmat_prior_ = np.full((4, 4), .25)
model.transmat_posterior_ = np.full((3, 3), 7/4)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test emission shape
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="st",
implementation=implementation)
model.emissionprob_prior_ = np.full((3, 3), 1/3)
model.emissionprob_posterior_ = np.asarray([[.3, .4, .3],
[.8, .1, .1],
[.2, .2, .6],
[.2, .6, .2]])
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test too many n_features
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.emissionprob_prior_ = np.full((4, 4), 7/4)
model.emissionprob_posterior_ = np.full((4, 4), .25)
model.n_features_ = 10
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Too small n_features
with pytest.raises(ValueError):
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="se",
implementation=implementation)
model.emissionprob_prior_ = np.full((4, 4), 7/4)
model.emissionprob_posterior_ = np.full((4, 4), .25)
model.n_features_ = 1
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Test that setting the desired prior value works
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984, init_params="ste",
implementation=implementation,
startprob_prior=1, transmat_prior=2, emissionprob_prior=3)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:191:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(emissionprob_prior=3, init_params='', n_components=4,
n_features=3, n_iter=1, random_state=1984,
startprob_prior=1, transmat_prior=2)
X = array([[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2]... [2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestVariationalCategorical.test_fit_beal[scaling] _______________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff7903dac0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_beal(self, implementation):
rs = check_random_state(1984)
m1, m2, m3 = self.get_beal_models()
sequences = []
lengths = []
for i in range(7):
for m in [m1, m2, m3]:
sequences.append(m.sample(39, random_state=rs)[0])
lengths.append(len(sequences[-1]))
sequences = np.concatenate(sequences)
model = vhmm.VariationalCategoricalHMM(12, n_iter=500,
implementation=implementation,
tol=1e-6,
random_state=rs,
verbose=False)
> assert_log_likelihood_increasing(model, sequences, lengths, 100)
hmmlearn/tests/test_variational_categorical.py:213:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(implementation='scaling', init_params='',
n_components=12, n_features=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF78E89A40)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________ TestVariationalCategorical.test_fit_beal[log] _________________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff7903dc40>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_beal(self, implementation):
rs = check_random_state(1984)
m1, m2, m3 = self.get_beal_models()
sequences = []
lengths = []
for i in range(7):
for m in [m1, m2, m3]:
sequences.append(m.sample(39, random_state=rs)[0])
lengths.append(len(sequences[-1]))
sequences = np.concatenate(sequences)
model = vhmm.VariationalCategoricalHMM(12, n_iter=500,
implementation=implementation,
tol=1e-6,
random_state=rs,
verbose=False)
> assert_log_likelihood_increasing(model, sequences, lengths, 100)
hmmlearn/tests/test_variational_categorical.py:213:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(init_params='', n_components=12, n_features=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF78F8BB40)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [2],
[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______ TestVariationalCategorical.test_fit_and_compare_with_em[scaling] _______
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff7903de20>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_and_compare_with_em(self, implementation):
# Explicitly setting Random State to test that certain
# model states will become "unused"
sequences, lengths = self.get_from_one_beal(7, 100, 1984)
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984,
init_params="e",
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_categorical.py:225:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(implementation='scaling', init_params='e',
n_components=4, n_features=3, n_iter=500,
random_state=1984)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestVariationalCategorical.test_fit_and_compare_with_em[log] _________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff7903dfa0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_and_compare_with_em(self, implementation):
# Explicitly setting Random State to test that certain
# model states will become "unused"
sequences, lengths = self.get_from_one_beal(7, 100, 1984)
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984,
init_params="e",
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_categorical.py:225:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(init_params='e', n_components=4, n_features=3,
n_iter=500, random_state=1984)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______ TestVariationalCategorical.test_fit_length_1_sequences[scaling] ________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff7903e180>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_length_1_sequences(self, implementation):
sequences1, lengths1 = self.get_from_one_beal(7, 100, 1984)
# Include some length 1 sequences
sequences2, lengths2 = self.get_from_one_beal(1, 1, 1984)
sequences = np.concatenate([sequences1, sequences2])
lengths = np.concatenate([lengths1, lengths2])
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984,
implementation=implementation)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:255:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(implementation='scaling', init_params='',
n_components=4, n_features=3, n_iter=1,
random_state=1984)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________ TestVariationalCategorical.test_fit_length_1_sequences[log] __________
self = <hmmlearn.tests.test_variational_categorical.TestVariationalCategorical object at 0xffff7903e300>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_length_1_sequences(self, implementation):
sequences1, lengths1 = self.get_from_one_beal(7, 100, 1984)
# Include some length 1 sequences
sequences2, lengths2 = self.get_from_one_beal(1, 1, 1984)
sequences = np.concatenate([sequences1, sequences2])
lengths = np.concatenate([lengths1, lengths2])
model = vhmm.VariationalCategoricalHMM(
4, n_iter=500, random_state=1984,
implementation=implementation)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_categorical.py:255:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/_emissions.py:27: in <lambda>
return functools.wraps(func)(lambda *args, **kwargs: func(*args, **kwargs))
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalCategoricalHMM(init_params='', n_components=4, n_features=3,
n_iter=1, random_state=1984)
X = array([[0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]... [0],
[1],
[2],
[0],
[1],
[2],
[0],
[1],
[2],
[0]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
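The same RuntimeError repeats below for the variational Gaussian tests, so the pattern is worth spelling out once: the compiled forward routines receive NumPy arrays and, at some point, a numpy.ndarray handle is reference-counted while the GIL is not held or the thread state is invalid. The remedy the linked pybind11 documentation describes is to confine any GIL-released region to pure C/C++ data and to re-acquire the GIL before touching Python objects again. The following is a minimal illustrative binding, not the actual hmmlearn._hmmc source; the module and function names are invented for the sketch:

    // Sketch of a pybind11 binding that keeps Python-object work under the GIL.
    #include <pybind11/pybind11.h>
    #include <pybind11/numpy.h>
    #include <algorithm>
    #include <vector>

    namespace py = pybind11;

    py::array_t<double> forward_sketch(py::array_t<double> frameprob) {
        auto buf = frameprob.unchecked<2>();       // GIL held: safe to inspect the array
        const py::ssize_t T = buf.shape(0), N = buf.shape(1);
        std::vector<double> work(static_cast<size_t>(T * N));
        {
            py::gil_scoped_release release;        // numeric loop only: raw memory, no
            for (py::ssize_t t = 0; t < T; ++t)    // Python objects created or refcounted
                for (py::ssize_t i = 0; i < N; ++i)
                    work[static_cast<size_t>(t * N + i)] = buf(t, i);
        }                                          // GIL re-acquired when 'release' ends
        py::array_t<double> out({T, N});           // safe again: GIL is held here
        std::copy(work.begin(), work.end(), out.mutable_data());
        return out;
    }

    PYBIND11_MODULE(_gil_demo, m) {                // "_gil_demo" is a made-up name
        m.def("forward_sketch", &forward_sketch,
              "Copy a 2-D array, releasing the GIL only around the C++ loop.");
    }

The point of the sketch is the scoping: py::array construction, return-value conversion, and any other handle manipulation happen only while the GIL is held, which is exactly the invariant the assertion in these tracebacks is checking.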
______________________ TestFull.test_random_fit[scaling] _______________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffff7903f500>
implementation = 'scaling', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(covariance_type='full', implementation='scaling', init_params='',
n_components=3)
rs = RandomState(MT19937) at 0xFFFF78E8AA40, lengths = [200, 200, 200, 200, 200]
X = array([[ -6.86811158, -15.5218548 , 2.57129256],
[ -4.58815074, -16.43758315, 3.29235714],
[ -6.5599..., 3.10129119],
[ -8.58810682, 5.49343563, 8.40750902],
[ -6.98040052, -16.12864527, 2.64082744]])
_state_sequence = array([1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 0,
0, 0, 0, 0, 2, 0, 0, 0, 1, 2, 0, 1, 0,...0, 2, 1,
1, 0, 0, 1, 1, 2, 0, 0, 0, 0, 2, 2, 0, 2, 2, 0, 0, 0, 2, 1, 0, 1,
2, 1, 0, 2, 2, 2, 0, 1, 0, 1])
model = VariationalGaussianHMM(implementation='scaling', init_params='', n_components=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF78E8AA40,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(implementation='scaling', init_params='', n_components=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF78E8AA40,
tol=1e-09)
X = array([[ -6.86811158, -15.5218548 , 2.57129256],
[ -4.58815074, -16.43758315, 3.29235714],
[ -6.5599..., 12.30542549],
[ 3.45864836, 9.93266313, 13.33197942],
[ 2.81248345, 8.96100579, 10.47967146]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________________ TestFull.test_random_fit[log] _________________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffff7903ed80>
implementation = 'log', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(covariance_type='full', init_params='', n_components=3)
rs = RandomState(MT19937) at 0xFFFF78FA6C40, lengths = [200, 200, 200, 200, 200]
X = array([[ -6.86811158, -15.5218548 , 2.57129256],
[ -4.58815074, -16.43758315, 3.29235714],
[ -6.5599..., 3.10129119],
[ -8.58810682, 5.49343563, 8.40750902],
[ -6.98040052, -16.12864527, 2.64082744]])
_state_sequence = array([1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 0,
0, 0, 0, 0, 2, 0, 0, 0, 1, 2, 0, 1, 0,...0, 2, 1,
1, 0, 0, 1, 1, 2, 0, 0, 0, 0, 2, 2, 0, 2, 2, 0, 0, 0, 2, 1, 0, 1,
2, 1, 0, 2, 2, 2, 0, 1, 0, 1])
model = VariationalGaussianHMM(init_params='', n_components=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF78FA6C40,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(init_params='', n_components=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF78FA6C40,
tol=1e-09)
X = array([[ -6.86811158, -15.5218548 , 2.57129256],
[ -4.58815074, -16.43758315, 3.29235714],
[ -6.5599..., 12.30542549],
[ 3.45864836, 9.93266313, 13.33197942],
[ 2.81248345, 8.96100579, 10.47967146]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestFull.test_fit_mcgrory_titterington1d[scaling] _______________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffff7903ec60>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(implementation='scaling', init_params='mc',
n_components=5, n_iter=1000,
random_state=RandomState(MT19937) at 0xFFFF78E8B540,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________ TestFull.test_fit_mcgrory_titterington1d[log] _________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffff7903f080>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(init_params='mc', n_components=5, n_iter=1000,
random_state=RandomState(MT19937) at 0xFFFF78E88D40,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestFull.test_common_initialization[scaling] _________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffff7903e5d0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(implementation='scaling', init_params='', n_components=4,
n_iter=1, tol=1e-09)
X = array([[ 0.21535104],
[ 2.82985744],
[-0.97185779],
[ 2.89081593],
[-0.66290202],
[...644159],
[ 0.32126301],
[ 2.73373158],
[-0.48778415],
[ 3.2352048 ],
[-2.21829728]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___________________ TestFull.test_common_initialization[log] ___________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffff7903e7b0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(init_params='', n_components=4, n_iter=1, tol=1e-09)
X = array([[-0.33240202],
[ 1.16575351],
[ 0.76708158],
[-0.16665794],
[-2.0417122 ],
[...612387],
[-1.47774877],
[ 1.99699008],
[ 3.9346355 ],
[-1.84294702],
[-2.14332482]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestFull.test_initialization[scaling] _____________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffff7903f3e0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [[[2.]], [[2.]], [[2.]]]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_posterior_ = [[2.]], [[2.]], [[2.]] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Manually setup covariance
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=[2., 2., 2., 2.],
scale_prior=[[[2.]], [[2.]], [[2.]], [[2.]]])
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:233:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0],
dof_prior=[2.0, 2.0, 2.0, 2.0], impleme...FFF7A9C6A40,
scale_prior=[[[2.0]], [[2.0]], [[2.0]], [[2.0]]],
tol=1e-09)
X = array([[-0.97620016],
[ 0.79725115],
[-0.27940365],
[ 3.32645134],
[-2.69876488],
[...774038],
[ 3.83803194],
[-1.46435466],
[ 2.95456941],
[-0.13443947],
[-0.96474541]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________________ TestFull.test_initialization[log] _______________________
self = <hmmlearn.tests.test_variational_gaussian.TestFull object at 0xffff7903f380>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [[[2.]], [[2.]], [[2.]]]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_posterior_ = [[2.]], [[2.]], [[2.]] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Manually setup covariance
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=[2., 2., 2., 2.],
scale_prior=[[[2.]], [[2.]], [[2.]], [[2.]]])
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:233:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0],
dof_prior=[2.0, 2.0, 2.0, 2.0], init_pa...FFF78EA6F40,
scale_prior=[[[2.0]], [[2.0]], [[2.0]], [[2.0]]],
tol=1e-09)
X = array([[ 1.90962598],
[ 1.38857322],
[ 0.88432176],
[ 1.50437126],
[-1.37679708],
[...987493],
[ 1.1246179 ],
[-2.31770774],
[ 2.39814844],
[ 1.40856394],
[ 2.12694691]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________________ TestTied.test_random_fit[scaling] _______________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffff7903dd00>
implementation = 'scaling', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(covariance_type='tied', implementation='scaling', init_params='',
n_components=3)
rs = RandomState(MT19937) at 0xFFFF78EA6D40, lengths = [200, 200, 200, 200, 200]
X = array([[ -6.76809081, -17.57929881, 2.65993861],
[ 4.47790401, 10.95422031, 12.25009349],
[ -9.2822..., 2.91189727],
[ 1.47179701, 9.35583105, 10.30599288],
[ -4.00663682, -15.17296134, 2.9706196 ]])
_state_sequence = array([1, 2, 0, 0, 0, 1, 1, 0, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0,
0, 0, 0, 0, 1, 1, 2, 0, 0, 0, 0, 0, 2,...0, 0, 0,
0, 0, 1, 0, 0, 0, 2, 1, 1, 0, 0, 1, 1, 2, 0, 0, 0, 0, 2, 2, 0, 2,
2, 0, 0, 0, 2, 1, 0, 1, 2, 1])
model = VariationalGaussianHMM(covariance_type='tied', implementation='scaling',
init_params='', n_comp...n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF78EA6D40,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='tied', implementation='scaling',
init_params='', n_comp...n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF78EA6D40,
tol=1e-09)
X = array([[ -6.76809081, -17.57929881, 2.65993861],
[ 4.47790401, 10.95422031, 12.25009349],
[ -9.2822..., 8.29790309],
[ -7.45761904, 8.0443883 , 8.74775768],
[ -7.54100296, 7.27668055, 8.35765657]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________________ TestTied.test_random_fit[log] _________________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffff7903e540>
implementation = 'log', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(covariance_type='tied', init_params='', n_components=3)
rs = RandomState(MT19937) at 0xFFFF78E88C40, lengths = [200, 200, 200, 200, 200]
X = array([[ -6.54597428, -14.48319166, 3.52814708],
[ 3.02773721, 8.66210382, 10.95226001],
[ -9.6765..., 1.73843505],
[ 3.90207131, 11.87153515, 12.46452122],
[ -6.04735701, -17.31754837, 1.46456652]])
_state_sequence = array([1, 2, 0, 0, 0, 1, 1, 0, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0,
0, 0, 0, 0, 1, 1, 2, 0, 0, 0, 0, 0, 2,...0, 0, 0,
0, 0, 1, 0, 0, 0, 2, 1, 1, 0, 0, 1, 1, 2, 0, 0, 0, 0, 2, 2, 0, 2,
2, 0, 0, 0, 2, 1, 0, 1, 2, 1])
model = VariationalGaussianHMM(covariance_type='tied', init_params='', n_components=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF78E88C40,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='tied', init_params='', n_components=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF78E88C40,
tol=1e-09)
X = array([[ -6.54597428, -14.48319166, 3.52814708],
[ 3.02773721, 8.66210382, 10.95226001],
[ -9.6765..., 10.28703698],
[ -9.27093832, 7.48888941, 7.75556056],
[ -9.50212106, 8.22396714, 7.70516698]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestTied.test_fit_mcgrory_titterington1d[scaling] _______________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffff7903fb90>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='tied', implementation='scaling',
init_params='mc', n_co...ter=1000,
random_state=RandomState(MT19937) at 0xFFFF78E89C40,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________ TestTied.test_fit_mcgrory_titterington1d[log] _________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffff790033b0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='tied', init_params='mc', n_components=5,
n_iter=1000,
random_state=RandomState(MT19937) at 0xFFFF78E8B540,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestTied.test_common_initialization[scaling] _________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffff79000890>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='tied', implementation='scaling',
init_params='', n_components=4, n_iter=1, tol=1e-09)
X = array([[ 3.02406044],
[ 0.15141778],
[ 0.44490074],
[ 0.92052631],
[-0.18359039],
[...156249],
[ 0.61494698],
[-2.27023399],
[ 2.64757888],
[-2.00572944],
[ 0.08367312]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
___________________ TestTied.test_common_initialization[log] ___________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffff79000a10>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='tied', init_params='', n_components=4,
n_iter=1, tol=1e-09)
X = array([[-1.09489413],
[-0.12957722],
[-1.73146656],
[ 3.55253037],
[ 2.62945991],
[...229695],
[ 0.93327602],
[ 3.14435486],
[-2.68712136],
[-0.81984256],
[ 3.63942885]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestTied.test_initialization[scaling] _____________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffff7903ebd0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [[[2]]]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_posterior_ = [[[2.]], [[2.]], [[2.]]] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Manually setup covariance
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=2,
scale_prior=[[2]],
)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:318:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0], covariance_type='tied',
dof_prior=2, im... random_state=RandomState(MT19937) at 0xFFFF78EA5D40,
scale_prior=[[2]], tol=1e-09)
X = array([[ 2.7343842 ],
[ 2.01508175],
[ 2.29638889],
[ 1.12585508],
[ 1.67279509],
[...808295],
[-0.79265056],
[-0.27745453],
[ 0.69004695],
[-0.23995418],
[-1.0133645 ]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________________ TestTied.test_initialization[log] _______________________
self = <hmmlearn.tests.test_variational_gaussian.TestTied object at 0xffff7903da30>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [[[2]]]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_posterior_ = [[[2.]], [[2.]], [[2.]]] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Manually setup covariance
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=2,
scale_prior=[[2]],
)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:318:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0], covariance_type='tied',
dof_prior=2, in... random_state=RandomState(MT19937) at 0xFFFF790B9B40,
scale_prior=[[2]], tol=1e-09)
X = array([[-1.51990156],
[-0.77421241],
[ 3.56219686],
[-1.64888838],
[ 2.6276434 ],
[...179403],
[-0.686967 ],
[ 1.27430623],
[-0.31739316],
[ 1.74639412],
[-2.01831639]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestSpherical.test_random_fit[scaling] ____________________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffff79000f20>
implementation = 'scaling', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='', n_components=3)
rs = RandomState(MT19937) at 0xFFFF78EA5540, lengths = [200, 200, 200, 200, 200]
X = array([[ -8.80112327, 8.00989019, 9.06698421],
[ -8.93310855, 8.03047065, 8.92124378],
[ -6.1530..., 8.83737591],
[ 2.84752765, 10.20119432, 12.21355309],
[ -8.94111272, 8.15425357, 8.74112105]])
_state_sequence = array([0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 2, 1, 1,
1, 1, 1, 2, 2, 2, 2, 0, 2, 0, 0, 0, 0,...2, 1, 1,
2, 2, 1, 1, 1, 2, 1, 2, 1, 0, 2, 2, 2, 1, 1, 1, 2, 0, 2, 2, 2, 2,
0, 2, 2, 2, 0, 0, 0, 0, 2, 0])
model = VariationalGaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='', n...n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF78EA5540,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='', n...n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF78EA5540,
tol=1e-09)
X = array([[ -8.80112327, 8.00989019, 9.06698421],
[ -8.93310855, 8.03047065, 8.92124378],
[ -6.1530..., 2.96146404],
[ -5.67847522, -16.01739311, 2.72149483],
[ 3.0501041 , 10.1190271 , 11.98035801]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________________ TestSpherical.test_random_fit[log] ______________________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffff790010d0>
implementation = 'log', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(covariance_type='spherical', init_params='', n_components=3)
rs = RandomState(MT19937) at 0xFFFF78EA4440, lengths = [200, 200, 200, 200, 200]
X = array([[ -8.80112327, 8.00989019, 9.06698421],
[ -8.93310855, 8.03047065, 8.92124378],
[ -6.1530..., 8.83737591],
[ 2.84752765, 10.20119432, 12.21355309],
[ -8.94111272, 8.15425357, 8.74112105]])
_state_sequence = array([0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 2, 1, 1,
1, 1, 1, 2, 2, 2, 2, 0, 2, 0, 0, 0, 0,...2, 1, 1,
2, 2, 1, 1, 1, 2, 1, 2, 1, 0, 2, 2, 2, 1, 1, 1, 2, 0, 2, 2, 2, 2,
0, 2, 2, 2, 0, 0, 0, 0, 2, 0])
model = VariationalGaussianHMM(covariance_type='spherical', init_params='',
n_components=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF78EA4440,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='spherical', init_params='',
n_components=3, n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF78EA4440,
tol=1e-09)
X = array([[ -8.80112327, 8.00989019, 9.06698421],
[ -8.93310855, 8.03047065, 8.92124378],
[ -6.1530..., 2.96146404],
[ -5.67847522, -16.01739311, 2.72149483],
[ 3.0501041 , 10.1190271 , 11.98035801]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________ TestSpherical.test_fit_mcgrory_titterington1d[scaling] ____________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffff79001280>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='mc',...ter=1000,
random_state=RandomState(MT19937) at 0xFFFF78EA6140,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestSpherical.test_fit_mcgrory_titterington1d[log] ______________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffff79001400>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='spherical', init_params='mc',
n_components=5, n_iter=1000,
random_state=RandomState(MT19937) at 0xFFFF78EA5640,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestSpherical.test_common_initialization[scaling] _______________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffff7903e690>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='spherical', implementation='scaling',
init_params='', n_components=4, n_iter=1, tol=1e-09)
X = array([[ 1.58581198],
[-1.43013571],
[ 3.50073686],
[-2.09080284],
[ 1.48390039],
[...711457],
[ 1.8787106 ],
[ 2.31673751],
[ 0.62417883],
[-2.57450891],
[ 0.51093669]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
________________ TestSpherical.test_common_initialization[log] _________________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffff7903e060>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='spherical', init_params='',
n_components=4, n_iter=1, tol=1e-09)
X = array([[ 2.55895004],
[ 1.9386079 ],
[-1.14441545],
[ 0.79939524],
[-0.84122716],
[...848896],
[-0.7355048 ],
[-1.27791075],
[-1.53171601],
[ 1.93602005],
[-1.20472876]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________________ TestSpherical.test_initialization[scaling] __________________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffff79000c20>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [2, 2, 2]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_posterior_ = [2, 2, 2] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Manually setup covariance
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=[2., 2., 2., 2.],
scale_prior=[2, 2, 2, 2],
)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:403:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0],
covariance_type='spherical',
... random_state=RandomState(MT19937) at 0xFFFF78F88640,
scale_prior=[2, 2, 2, 2], tol=1e-09)
X = array([[-0.69995355],
[ 1.11732084],
[ 2.34671222],
[ 0.38667263],
[ 0.49315166],
[...586139],
[ 0.81443462],
[-1.66759168],
[ 3.14268492],
[ 3.76227287],
[ 0.80644186]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestSpherical.test_initialization[log] ____________________
self = <hmmlearn.tests.test_variational_gaussian.TestSpherical object at 0xffff79000da0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [2, 2, 2]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_posterior_ = [2, 2, 2] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Manually setup covariance
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=[2., 2., 2., 2.],
scale_prior=[2, 2, 2, 2],
)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:403:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0],
covariance_type='spherical',
... random_state=RandomState(MT19937) at 0xFFFF790BA640,
scale_prior=[2, 2, 2, 2], tol=1e-09)
X = array([[ 3.45654067],
[-2.75120263],
[ 2.70685609],
[ 2.19256817],
[-0.71552539],
[...986977],
[-2.05296787],
[ 0.98484479],
[ 2.68913339],
[-0.30012857],
[ 3.23805001]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestDiagonal.test_random_fit[scaling] _____________________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffff790016a0>
implementation = 'scaling', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}
h = GaussianHMM(implementation='scaling', init_params='', n_components=3)
rs = RandomState(MT19937) at 0xFFFF78EA6340, lengths = [200, 200, 200, 200, 200]
X = array([[ -8.69644052, 7.84695023, 9.16793735],
[ -9.13224583, 7.92499119, 9.31288597],
[ -5.8357..., 9.05969418],
[ 3.16991695, 9.72247605, 12.12314999],
[ 3.09806199, 9.95716109, 11.96433113]])
_state_sequence = array([0, 0, 1, 1, 0, 2, 2, 2, 2, 1, 1, 2, 1, 0, 0, 0, 1, 2, 1, 1, 1, 1,
1, 2, 2, 2, 2, 0, 2, 0, 0, 0, 0, 0, 0,...1, 2, 2,
1, 1, 1, 2, 1, 2, 1, 0, 2, 2, 2, 1, 1, 1, 2, 0, 2, 2, 2, 2, 0, 2,
2, 2, 0, 0, 0, 0, 2, 0, 2, 2])
model = VariationalGaussianHMM(covariance_type='diag', implementation='scaling',
init_params='', n_comp...n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF78EA6340,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='diag', implementation='scaling',
init_params='', n_comp...n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF78EA6340,
tol=1e-09)
X = array([[ -8.69644052, 7.84695023, 9.16793735],
[ -9.13224583, 7.92499119, 9.31288597],
[ -5.8357..., 11.98612945],
[ 2.90646378, 9.9957161 , 11.98128432],
[ -8.65470261, 8.11543755, 8.85803583]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________________ TestDiagonal.test_random_fit[log] _______________________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffff79001850>
implementation = 'log', params = 'stmc', n_features = 3, n_components = 3
kwargs = {}, h = GaussianHMM(init_params='', n_components=3)
rs = RandomState(MT19937) at 0xFFFF78EA5F40, lengths = [200, 200, 200, 200, 200]
X = array([[ -8.69644052, 7.84695023, 9.16793735],
[ -9.13224583, 7.92499119, 9.31288597],
[ -5.8357..., 9.05969418],
[ 3.16991695, 9.72247605, 12.12314999],
[ 3.09806199, 9.95716109, 11.96433113]])
_state_sequence = array([0, 0, 1, 1, 0, 2, 2, 2, 2, 1, 1, 2, 1, 0, 0, 0, 1, 2, 1, 1, 1, 1,
1, 2, 2, 2, 2, 0, 2, 0, 0, 0, 0, 0, 0,...1, 2, 2,
1, 1, 1, 2, 1, 2, 1, 0, 2, 2, 2, 1, 1, 1, 2, 0, 2, 2, 2, 2, 0, 2,
2, 2, 0, 0, 0, 0, 2, 0, 2, 2])
model = VariationalGaussianHMM(covariance_type='diag', init_params='', n_components=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF78EA5F40,
tol=1e-09)
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_random_fit(self, implementation, params='stmc', n_features=3,
n_components=3, **kwargs):
h = hmm.GaussianHMM(n_components, self.covariance_type,
implementation=implementation, init_params="")
rs = check_random_state(1)
h.startprob_ = normalized(rs.rand(n_components))
h.transmat_ = normalized(
rs.rand(n_components, n_components), axis=1)
h.means_ = rs.randint(-20, 20, (n_components, n_features))
h.covars_ = make_covar_matrix(
self.covariance_type, n_components, n_features, random_state=rs)
lengths = [200] * 5
X, _state_sequence = h.sample(sum(lengths), random_state=rs)
# Now learn a model
model = vhmm.VariationalGaussianHMM(
n_components, n_iter=50, tol=1e-9, random_state=rs,
covariance_type=self.covariance_type,
implementation=implementation)
> assert_log_likelihood_increasing(model, X, lengths, n_iter=10)
hmmlearn/tests/test_variational_gaussian.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='diag', init_params='', n_components=3,
n_iter=1,
random_state=RandomState(MT19937) at 0xFFFF78EA5F40,
tol=1e-09)
X = array([[ -8.69644052, 7.84695023, 9.16793735],
[ -9.13224583, 7.92499119, 9.31288597],
[ -5.8357..., 11.98612945],
[ 2.90646378, 9.9957161 , 11.98128432],
[ -8.65470261, 8.11543755, 8.85803583]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________ TestDiagonal.test_fit_mcgrory_titterington1d[scaling] _____________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffff79001a00>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='diag', implementation='scaling',
init_params='mc', n_co...ter=1000,
random_state=RandomState(MT19937) at 0xFFFF78EA7140,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
______________ TestDiagonal.test_fit_mcgrory_titterington1d[log] _______________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffff79001b80>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_fit_mcgrory_titterington1d(self, implementation):
random_state = check_random_state(234234)
# Setup to assure convergence
sequences, lengths = get_sequences(500, 1,
model=get_mcgrory_titterington(),
rs=random_state)
model = vhmm.VariationalGaussianHMM(
5, n_iter=1000, tol=1e-9, random_state=random_state,
init_params="mc",
covariance_type=self.covariance_type,
implementation=implementation)
vi_uniform_startprob_and_transmat(model, lengths)
> model.fit(sequences, lengths)
hmmlearn/tests/test_variational_gaussian.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='diag', init_params='mc', n_components=5,
n_iter=1000,
random_state=RandomState(MT19937) at 0xFFFF78E8BA40,
tol=1e-09)
X = array([[-2.11177854],
[ 1.7446186 ],
[ 2.05590346],
[ 0.02955277],
[ 1.87276123],
[...087992],
[-0.78888704],
[ 3.01000758],
[-1.12831821],
[ 2.30176638],
[-0.90718994]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_______________ TestDiagonal.test_common_initialization[scaling] _______________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffff79001d60>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='diag', implementation='scaling',
init_params='', n_components=4, n_iter=1, tol=1e-09)
X = array([[ 2.94840979],
[-0.4236967 ],
[-1.86164101],
[-2.70760383],
[ 0.52817596],
[...614648],
[ 1.17327289],
[-0.48308756],
[-1.23521059],
[ 2.96221347],
[-2.4055287 ]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
_________________ TestDiagonal.test_common_initialization[log] _________________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffff79001ee0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_common_initialization(self, implementation):
sequences, lengths = get_sequences(50, 10,
model=get_mcgrory_titterington())
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
implementation=implementation)
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9,
covariance_type="incorrect",
init_params="",
implementation=implementation)
model.startprob_= np.asarray([.25, .25, .25, .25])
model.score(sequences, lengths)
# Manually setup means - should converge
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stc",
covariance_type=self.covariance_type,
implementation=implementation)
model.means_prior_ = [[1], [1], [1], [1]]
model.means_posterior_ = [[2], [1], [3], [4]]
model.beta_prior_ = [1, 1, 1, 1]
model.beta_posterior_ = [1, 1, 1, 1]
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(covariance_type='diag', init_params='', n_components=4,
n_iter=1, tol=1e-09)
X = array([[-1.00900958],
[ 1.83548612],
[-1.18687723],
[ 1.39357219],
[ 2.31529054],
[...120186],
[-0.59813352],
[ 1.09476375],
[ 2.7001891 ],
[ 0.25515909],
[-1.58409402]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
__________________ TestDiagonal.test_initialization[scaling] ___________________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffff7903f0e0>
implementation = 'scaling'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [[2], [2], [2]]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type=self.covariance_type,
implementation=implementation)
model.dof_prior_ = [1, 1, 1, 1]
model.dof_posterior_ = [1, 1, 1, 1]
model.scale_prior_ = [[2], [2], [2], [2]]
model.scale_posterior_ = [[2, 2, 2]] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=[2., 2., 2., 2.],
scale_prior=[[2], [2], [2], [2]]
)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:486:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0], covariance_type='diag',
dof_prior=[2.0,...andom_state=RandomState(MT19937) at 0xFFFF790BA640,
scale_prior=[[2], [2], [2], [2]], tol=1e-09)
X = array([[ 0.37725899],
[ 3.11738285],
[-0.09163979],
[ 1.69939899],
[ 1.17211122],
[...975532],
[-1.29219785],
[-2.21400016],
[-0.12401679],
[ 3.5650227 ],
[-0.33847644]])
def _fit_scaling(self, X):
frameprob = self._compute_subnorm_likelihood(X)
> logprob, fwdlattice, scaling_factors = _hmmc.forward_scaling(
self.startprob_subnorm_, self.transmat_subnorm_, frameprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1080: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
____________________ TestDiagonal.test_initialization[log] _____________________
self = <hmmlearn.tests.test_variational_gaussian.TestDiagonal object at 0xffff79000bc0>
implementation = 'log'
@pytest.mark.parametrize("implementation", ["scaling", "log"])
def test_initialization(self, implementation):
random_state = check_random_state(234234)
sequences, lengths = get_sequences(
50, 10, model=get_mcgrory_titterington())
# dof's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_prior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.dof_posterior_ = [1, 1, 1]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# scales's have wrong shape
with pytest.raises(ValueError):
model = self.new_for_init(implementation)
model.scale_prior_ = [[2], [2], [2]]
assert_log_likelihood_increasing(model, sequences, lengths, 10)
with pytest.raises(ValueError):
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, init_params="stm",
covariance_type=self.covariance_type,
implementation=implementation)
model.dof_prior_ = [1, 1, 1, 1]
model.dof_posterior_ = [1, 1, 1, 1]
model.scale_prior_ = [[2], [2], [2], [2]]
model.scale_posterior_ = [[2, 2, 2]] # this is wrong
assert_log_likelihood_increasing(model, sequences, lengths, 10)
# Set priors correctly via params
model = vhmm.VariationalGaussianHMM(
4, n_iter=500, tol=1e-9, random_state=random_state,
covariance_type=self.covariance_type,
implementation=implementation,
means_prior=[[0.], [0.], [0.], [0.]],
beta_prior=[2., 2., 2., 2.],
dof_prior=[2., 2., 2., 2.],
scale_prior=[[2], [2], [2], [2]]
)
> assert_log_likelihood_increasing(model, sequences, lengths, 10)
hmmlearn/tests/test_variational_gaussian.py:486:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hmmlearn/tests/__init__.py:46: in assert_log_likelihood_increasing
h.fit(X, lengths=lengths)
hmmlearn/base.py:473: in fit
stats, curr_logprob = self._do_estep(X, lengths)
hmmlearn/base.py:750: in _do_estep
lattice, logprob, posteriors, fwdlattice, bwdlattice = impl(sub_X)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = VariationalGaussianHMM(beta_prior=[2.0, 2.0, 2.0, 2.0], covariance_type='diag',
dof_prior=[2.0,...andom_state=RandomState(MT19937) at 0xFFFF78EA6840,
scale_prior=[[2], [2], [2], [2]], tol=1e-09)
X = array([[-1.50974603],
[ 0.66501942],
[ 1.03376567],
[-0.33821964],
[-0.03369866],
[...945696],
[ 1.03948035],
[ 3.29548267],
[-1.67415189],
[-0.95330419],
[ 2.79920426]])
def _fit_log(self, X):
framelogprob = self._compute_subnorm_log_likelihood(X)
> logprob, fwdlattice = _hmmc.forward_log(
self.startprob_subnorm_, self.transmat_subnorm_, framelogprob)
E RuntimeError: pybind11::handle::inc_ref() PyGILState_Check() failure.
hmmlearn/base.py:1091: RuntimeError
----------------------------- Captured stderr call -----------------------------
pybind11::handle::inc_ref() is being called while the GIL is either not held or invalid. Please see https://pybind11.readthedocs.io/en/stable/advanced/misc.html#common-sources-of-global-interpreter-lock-errors for debugging advice.
If you are convinced there is no bug in your code, you can #define PYBIND11_NO_ASSERT_GIL_HELD_INCREF_DECREF to disable this check. In that case you have to ensure this #define is consistently used for all translation units linked into a given pybind11 extension, otherwise there will be ODR violations. The failing pybind11::handle::inc_ref() call was triggered on a numpy.ndarray object.
=============================== warnings summary ===============================
.pybuild/cpython3_3.12_hmmlearn/build/hmmlearn/tests/test_variational_categorical.py: 9 warnings
.pybuild/cpython3_3.12_hmmlearn/build/hmmlearn/tests/test_variational_gaussian.py: 15 warnings
/<<PKGBUILDDIR>>/.pybuild/cpython3_3.12_hmmlearn/build/hmmlearn/base.py:1192: RuntimeWarning: underflow encountered in exp
self.startprob_subnorm_ = np.exp(startprob_log_subnorm)
.pybuild/cpython3_3.12_hmmlearn/build/hmmlearn/tests/test_variational_categorical.py: 7 warnings
.pybuild/cpython3_3.12_hmmlearn/build/hmmlearn/tests/test_variational_gaussian.py: 13 warnings
/<<PKGBUILDDIR>>/.pybuild/cpython3_3.12_hmmlearn/build/hmmlearn/base.py:1197: RuntimeWarning: underflow encountered in exp
self.transmat_subnorm_ = np.exp(transmat_log_subnorm)
.pybuild/cpython3_3.12_hmmlearn/build/hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_beal[scaling]
/<<PKGBUILDDIR>>/.pybuild/cpython3_3.12_hmmlearn/build/hmmlearn/base.py:1130: RuntimeWarning: underflow encountered in exp
return np.exp(self._compute_subnorm_log_likelihood(X))
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
FAILED hmmlearn/tests/test_base.py::TestBaseAgainstWikipedia::test_do_forward_scaling_pass
FAILED hmmlearn/tests/test_base.py::TestBaseAgainstWikipedia::test_do_forward_pass
FAILED hmmlearn/tests/test_base.py::TestBaseAgainstWikipedia::test_do_backward_scaling_pass
FAILED hmmlearn/tests/test_base.py::TestBaseAgainstWikipedia::test_do_viterbi_pass
FAILED hmmlearn/tests/test_base.py::TestBaseAgainstWikipedia::test_score_samples
FAILED hmmlearn/tests/test_base.py::TestBaseConsistentWithGMM::test_score_samples
FAILED hmmlearn/tests/test_base.py::TestBaseConsistentWithGMM::test_decode - ...
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalAgainstWikipedia::test_decode_viterbi[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalAgainstWikipedia::test_decode_viterbi[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalAgainstWikipedia::test_decode_map[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalAgainstWikipedia::test_decode_map[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalAgainstWikipedia::test_predict[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalAgainstWikipedia::test_predict[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_n_features[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_n_features[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_score_samples[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_score_samples[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_fit[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_fit[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_fit_emissionprob[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_fit_emissionprob[log]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_fit_with_init[scaling]
FAILED hmmlearn/tests/test_categorical_hmm.py::TestCategoricalHMM::test_fit_with_init[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_ignored_init_warns[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_ignored_init_warns[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_sequences_of_different_length[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_sequences_of_different_length[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_with_length_one_signal[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_with_length_one_signal[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_zero_variance[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_zero_variance[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_with_priors[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_with_priors[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_startprob_and_transmat[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_fit_startprob_and_transmat[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithSphericalCovars::test_underflow_from_scaling[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_ignored_init_warns[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_ignored_init_warns[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_sequences_of_different_length[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_sequences_of_different_length[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_with_length_one_signal[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_with_length_one_signal[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_zero_variance[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_zero_variance[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_with_priors[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_with_priors[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_left_right[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithDiagonalCovars::test_fit_left_right[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_ignored_init_warns[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_ignored_init_warns[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_sequences_of_different_length[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_sequences_of_different_length[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_with_length_one_signal[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_with_length_one_signal[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_zero_variance[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_zero_variance[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_with_priors[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithTiedCovars::test_fit_with_priors[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_ignored_init_warns[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_ignored_init_warns[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_sequences_of_different_length[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_sequences_of_different_length[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_with_length_one_signal[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_with_length_one_signal[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_zero_variance[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_zero_variance[log]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_with_priors[scaling]
FAILED hmmlearn/tests/test_gaussian_hmm.py::TestGaussianHMMWithFullCovars::test_fit_with_priors[log]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-diag]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-spherical]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-tied]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[scaling-full]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-diag]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-spherical]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-tied]
FAILED hmmlearn/tests/test_gmm_hmm_multisequence.py::test_gmmhmm_multi_sequence_fit_invariant_to_sequence_ordering[log-full]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_fit[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_fit_sparse_data[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_fit_sparse_data[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithSphericalCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_fit[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_fit_sparse_data[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_fit_sparse_data[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithDiagCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_fit[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_fit_sparse_data[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_fit_sparse_data[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithTiedCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_score_samples_and_decode[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_score_samples_and_decode[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_fit[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_fit[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_fit_sparse_data[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_fit_sparse_data[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_criterion[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMMWithFullCovars::test_criterion[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMM_KmeansInit::test_kmeans[scaling]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMM_KmeansInit::test_kmeans[log]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMM_MultiSequence::test_chunked[diag]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMM_MultiSequence::test_chunked[spherical]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMM_MultiSequence::test_chunked[tied]
FAILED hmmlearn/tests/test_gmm_hmm_new.py::TestGMMHMM_MultiSequence::test_chunked[full]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_score_samples[scaling]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_score_samples[log]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_fit[scaling]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_fit[log]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_fit_emissionprob[scaling]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_fit_emissionprob[log]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_fit_with_init[scaling]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_fit_with_init[log]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_compare_with_categorical_hmm[scaling]
FAILED hmmlearn/tests/test_multinomial_hmm.py::TestMultinomialHMM::test_compare_with_categorical_hmm[log]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_score_samples[scaling]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_score_samples[log]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_fit[scaling]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_fit[log] - Ru...
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_fit_lambdas[scaling]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_fit_lambdas[log]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_fit_with_init[scaling]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_fit_with_init[log]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_criterion[scaling]
FAILED hmmlearn/tests/test_poisson_hmm.py::TestPoissonHMM::test_criterion[log]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_init_priors[scaling]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_init_priors[log]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_n_features[scaling]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_n_features[log]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_init_incorrect_priors[scaling]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_init_incorrect_priors[log]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_beal[scaling]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_beal[log]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_and_compare_with_em[scaling]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_and_compare_with_em[log]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_length_1_sequences[scaling]
FAILED hmmlearn/tests/test_variational_categorical.py::TestVariationalCategorical::test_fit_length_1_sequences[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_random_fit[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_random_fit[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_fit_mcgrory_titterington1d[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_fit_mcgrory_titterington1d[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_common_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_common_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestFull::test_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_random_fit[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_random_fit[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_fit_mcgrory_titterington1d[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_fit_mcgrory_titterington1d[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_common_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_common_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestTied::test_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_random_fit[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_random_fit[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_fit_mcgrory_titterington1d[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_fit_mcgrory_titterington1d[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_common_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_common_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestSpherical::test_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_random_fit[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_random_fit[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_fit_mcgrory_titterington1d[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_fit_mcgrory_titterington1d[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_common_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_common_initialization[log]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_initialization[scaling]
FAILED hmmlearn/tests/test_variational_gaussian.py::TestDiagonal::test_initialization[log]
=========== 202 failed, 92 passed, 26 xfailed, 45 warnings in 27.18s ===========
E: pybuild pybuild:389: test: plugin pyproject failed with: exit code=1: cd /<<PKGBUILDDIR>>/.pybuild/cpython3_3.12_hmmlearn/build; python3.12 -m pytest --pyargs hmmlearn
dh_auto_test: error: pybuild --test --test-pytest -i python{version} -p "3.13 3.12" returned exit code 13
make: *** [debian/rules:9: binary-arch] Error 25
dpkg-buildpackage: error: debian/rules binary-arch subprocess returned exit status 2
--------------------------------------------------------------------------------
Build finished at 2024-10-04T23:11:26Z
Finished
--------
+------------------------------------------------------------------------------+
| Cleanup |
+------------------------------------------------------------------------------+
Purging /<<BUILDDIR>>
Not cleaning session: cloned chroot in use
E: Build failure (dpkg-buildpackage died)
+------------------------------------------------------------------------------+
| Summary |
+------------------------------------------------------------------------------+
Build Architecture: arm64
Build Type: any
Build-Space: 29284
Build-Time: 80
Distribution: sid
Fail-Stage: build
Host Architecture: arm64
Install-Time: 57
Job: /tmp/debusine-fetch-exec-upload-d9qujoev/python-hmmlearn_0.3.0-5.dsc
Machine Architecture: arm64
Package: python-hmmlearn
Package-Time: 181
Source-Version: 0.3.0-5
Space: 29284
Status: attempted
Version: 0.3.0-5+bd1
--------------------------------------------------------------------------------
Finished at 2024-10-04T23:11:26Z
Build needed 00:03:01, 29284k disk space