Compare commits

...

13 Commits

Author SHA1 Message Date
Angel Rodriguez c3e4b23dac Update README-en.md
Rename to the standard W3C name
2024-11-25 18:45:24 +01:00
Angel Rodriguez 866a92b3d9 README.EN.md
English translation of the Readme. Update whenever the Spanish Readme is modified.
2024-11-25 18:42:55 +01:00
Vadim vtroshchinskiy fdb530eab3 Add original source 2024-11-14 13:23:15 +01:00
Vadim vtroshchinskiy 5a8d7dd303 Add oglive key to forgejo 2024-11-13 11:55:04 +01:00
Vadim vtroshchinskiy 927e24e13e Improve logging 2024-11-13 11:54:55 +01:00
Vadim vtroshchinskiy bd5d15fbe7 More detailed API logging 2024-11-13 11:53:54 +01:00
Vadim vtroshchinskiy 3f852d6924 Use packaged pyblkid 2024-11-13 08:25:02 +01:00
Vadim vtroshchinskiy 5cdd566df9 Add pyblkid debian files 2024-11-13 08:24:06 +01:00
Vadim vtroshchinskiy dbf0dda758 Add pylkid 2024-11-12 14:17:06 +00:00
Vadim vtroshchinskiy fc2cf5cd45 Add Debian packaging 2024-11-12 13:37:20 +01:00
Vadim vtroshchinskiy 5daeb8200f Initial package contents 2024-11-12 13:36:01 +01:00
Vadim vtroshchinskiy 1472ccbce6 Fix SSH key generation and extraction 2024-11-12 13:29:35 +01:00
Vadim vtroshchinskiy ec7c96fe49 Partial setsshkey implementation 2024-11-06 14:34:55 +01:00
92 changed files with 8232 additions and 16 deletions

127
README-en.md 100644
View File

@@ -0,0 +1,127 @@
# OpenGnsys - Git
### Installation Guide to Deploy the "oggit" API
This guide outlines the steps necessary to download, install, configure, and deploy the "oggit" API on a server, including **Nginx** configuration and integration with **Nelmio API Doc**.
#### Prerequisites:
- **PHP 8.2** or higher.
- **Composer** installed.
- **Nginx** configured.
- **Symfony** 5.x or 6.x.
- **Git**.
### 1. Download the Repository
Clone the project repository onto the server where you want to deploy the API. Make sure Git is installed and the server can reach the repository host.
### 2. Install Dependencies
Once the repository is downloaded, navigate to the project root and run the following command to install all necessary dependencies using **Composer**:
```bash
composer install
```
### 3. Configure Nginx
Next, configure **Nginx** to serve the Symfony project. Create a configuration file at `/etc/nginx/sites-available/oggit` or edit your existing Nginx configuration file.
```bash
sudo nano /etc/nginx/sites-available/oggit
```
Copy and paste the following configuration into the file:
```nginx
server {
    listen 80;
    server_name localhost;  # Replace 'localhost' with the server's IP address if necessary

    # Document root for the Symfony project
    root /opt/oggit/public;

    # Block to handle requests to /oggit
    location /oggit {
        try_files $uri $uri/ /index.php?$query_string;

        # Increase the timeouts for the oglive install
        proxy_read_timeout 600;
        proxy_connect_timeout 600;
        proxy_send_timeout 600;
        send_timeout 600;
    }

    # Block to handle requests to index.php
    location ~ ^/index.php(/|$) {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;  # Make sure this matches your PHP-FPM socket
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param DOCUMENT_ROOT $document_root;
        internal;
    }

    # Return 404 for any PHP request other than index.php
    location ~ \.php$ {
        return 404;
    }

    # Error and access logs for the Symfony project
    error_log /var/log/nginx/oggit_error.log;
    access_log /var/log/nginx/oggit_access.log;

    location /api/doc {
        try_files $uri /index.php?$query_string;
    }
}
```
Save the changes and enable the site:
```bash
sudo ln -s /etc/nginx/sites-available/oggit /etc/nginx/sites-enabled/
```
Restart Nginx to apply the changes:
```bash
sudo systemctl restart nginx
```
### 4. Configure Nelmio API Doc
If you're using **NelmioApiDocBundle** for Swagger documentation, you need to configure **Nelmio** to accept specific routes, such as the `/oggit` routes. To do this, edit the **Nelmio** configuration file:
```bash
sudo nano /opt/oggit/config/packages/nelmio_api_doc.yaml
```
Make sure the file has the following configuration to allow API routes:
```yaml
nelmio_api_doc:
    documentation:
        info:
            title: Oggit API
            description: Oggit API Documentation
            version: 1.0.0
    areas: # filter documented areas
        path_patterns:
            - ^/oggit
```
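NelmioApiDocBundle treats each `path_patterns` entry as a regular expression matched against route paths. As a quick illustrative check (Python, not part of the project), this shows which routes the `^/oggit` filter admits:

```python
import re

# The pattern from nelmio_api_doc.yaml: only routes under /oggit are documented.
pattern = re.compile(r"^/oggit")

routes = ["/oggit", "/oggit/repositories", "/api/doc", "/index.php"]
documented = [r for r in routes if pattern.match(r)]
print(documented)  # ['/oggit', '/oggit/repositories']
```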
Save the changes.
### 5. Verify the Installation
1. **Check the API**: Access your API at `http://<server-IP>/oggit`. If everything is correctly configured, you should be able to make requests to the defined endpoints.
2. **Swagger Documentation**: Access the Swagger API documentation at `http://<server-IP>/api/doc` to view the documented endpoints.
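As a sketch of this verification step, the following illustrative Python snippet (not project code; the helper names and the sample address are made up) builds the two URLs and defines a minimal status probe:

```python
import urllib.request

def verification_urls(server_ip: str) -> dict:
    """Compose the two endpoints from step 5 for a given server address."""
    base = f"http://{server_ip}"
    return {"api": f"{base}/oggit", "docs": f"{base}/api/doc"}

def check_endpoint(url: str, timeout: int = 5) -> int:
    """Return the HTTP status code for url (raises on connection errors)."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.status

urls = verification_urls("192.168.1.10")  # hypothetical server IP
print(urls["api"])   # http://192.168.1.10/oggit
print(urls["docs"])  # http://192.168.1.10/api/doc
```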
### 6. Logs and Debugging
Check the Nginx logs to diagnose any errors during deployment. The logs are located at the following paths:
- **Error logs**: `/var/log/nginx/oggit_error.log`
- **Access logs**: `/var/log/nginx/oggit_access.log`
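Unless a custom `log_format` is set, nginx writes access logs in the default "combined" format. As an illustrative aid (the sample line and the parsing code are assumptions, not project code), one way to pull the request path and status out of such a line:

```python
import re

# A made-up access-log line in nginx's default "combined" format.
LINE = '127.0.0.1 - - [25/Nov/2024:18:45:24 +0100] "GET /oggit HTTP/1.1" 200 512 "-" "curl/8.5.0"'

COMBINED = re.compile(
    r'(?P<addr>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]+" (?P<status>\d{3}) (?P<size>\d+)'
)

m = COMBINED.match(LINE)
print(m.group("path"), m.group("status"))  # /oggit 200
```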

View File

@@ -9,6 +9,10 @@
# Must have working locales, or unicode strings will fail. Install 'locales', configure /etc/locale.gen, run locale-gen.
#
import os
import sys
sys.path.insert(0, "/opt/opengnsys/python3/dist-packages")
import shutil
import argparse
@@ -17,7 +21,7 @@ import logging
import subprocess
import json
import sys
from pathlib import Path

View File

@@ -9,16 +9,27 @@ To install Python dependencies, the venv module is used (https://docs.python.
## Ubuntu 24.04
-sudo apt install python3-git python3-libarchive-c python3-termcolor bsdextrautils
+sudo apt install python3-git opengnsys-libarchive-c python3-termcolor bsdextrautils
-## Add an SSH key if necessary
+## Add SSH keys to the oglive
-The process fails if there is no SSH key in the image. Use:
+The Git system accesses the ogrepository over SSH. To work, it needs the oglive to have an SSH key, and the ogrepository to accept it.
-/opt/opengnsys/bin/setsslkey
+The Git installer can make the required changes with:
-to add it.
+./opengnsys_git_installer.py --set-ssh-key
+Or, to target a specific oglive:
+./opengnsys_git_installer.py --set-ssh-key --oglive 1 # oglive number
+Running this command adds the SSH key to Forgejo automatically.
+The existing key can be extracted with:
+./opengnsys_git_installer.py --extract-ssh-key --quiet
# Run
@@ -34,7 +45,11 @@ Forgejo manages the repositories and SSH access, so it must remain
The default user is `oggit` with password `opengnsys`.
# Packages with dependencies
The OgGit system requires Python modules that Ubuntu 24.04 does not ship, or ships in versions that are too old.
The package sources are in oggit/packages.
# Source code documentation

View File

@@ -2,6 +2,10 @@
"""Script para la instalación del repositorio git"""
import os
import sys
sys.path.insert(0, "/opt/opengnsys/python3/dist-packages")
import shutil
import argparse
import tempfile
@@ -13,12 +17,16 @@ import grp
from termcolor import cprint
import git
import libarchive
from libarchive.extract import *
#from libarchive.entry import FileType
import urllib.request
import pathlib
import socket
import time
import requests
import tempfile
import hashlib
#FORGEJO_VERSION="8.0.3"
FORGEJO_VERSION="9.0.0"
@@ -125,8 +133,13 @@ class OpengnsysGitInstaller:
self.temp_dir = None
self.script_path = os.path.realpath(os.path.dirname(__file__))
# Possible names for SSH key
# Possible names for SSH public keys
self.ssh_key_users = ["root", "opengnsys"]
self.key_names = ["id_rsa.pub", "id_ed25519.pub", "id_ecdsa.pub", "id_ed25519_sk.pub", "id_ecdsa_sk.pub"]
# Possible names for SSH key in oglive
self.key_paths = ["scripts/ssl/id_rsa.pub", "scripts/ssl/id_ed25519.pub", "scripts/ssl/id_ecdsa.pub", "scripts/ssl/id_ed25519_sk.pub", "scripts/ssl/id_ecdsa_sk.pub"]
self.key_paths_dict = {}
for kp in self.key_paths:
@@ -303,9 +316,13 @@ class OpengnsysGitInstaller:
public_key = None
with libarchive.file_reader(client_initrd_path) as initrd:
for file in initrd:
#self.__logger.debug("Archivo: %s", file)
self.__logger.debug("Archivo: %s", file)
if file.pathname in self.key_paths_dict:
pathname = file.pathname;
if pathname.startswith("./"):
pathname = pathname[2:]
if pathname in self.key_paths_dict:
data = bytearray()
for block in file.get_blocks():
data = data + block
@@ -318,6 +335,134 @@ class OpengnsysGitInstaller:
return public_key
def set_ssh_key(self, client_num = None):
INITRD = "oginitrd.img"
tftp_dir = os.path.join(self.base_path, "tftpboot")
if client_num is None:
self.__logger.info("Will modify default client")
client_num = self.oglive.get_default()
ogclient = self.oglive.get_clients()[client_num]
client_initrd_path = os.path.join(tftp_dir, ogclient, INITRD)
client_initrd_path_new = client_initrd_path + ".new"
self.__logger.debug("initrd path for ogclient %s is %s", ogclient, client_initrd_path)
temp_dir = tempfile.TemporaryDirectory()
temp_dir_path = temp_dir.name
#temp_dir_path = "/tmp/extracted"
if os.path.exists(temp_dir_path):
shutil.rmtree(temp_dir_path)
pathlib.Path(temp_dir_path).mkdir(parents=True, exist_ok = True)
self.__logger.debug("Uncompressing initrd %s into %s", client_initrd_path, temp_dir_path)
os.chdir(temp_dir_path)
libarchive.extract_file(client_initrd_path, flags = EXTRACT_UNLINK | EXTRACT_OWNER | EXTRACT_PERM | EXTRACT_FFLAGS | EXTRACT_TIME)
ssh_key_dir = os.path.join(temp_dir_path, "scripts", "ssl")
client_key_path = os.path.join(ssh_key_dir, "id_ed25519")
authorized_keys_path = os.path.join(ssh_key_dir, "authorized_keys")
oglive_public_key = ""
# Create a SSH key on the oglive, if needed
pathlib.Path(ssh_key_dir).mkdir(parents=True, exist_ok=True)
if os.path.exists(client_key_path):
self.__logger.info("Creating SSH key not necessary, it already is in the initrd")
else:
self.__logger.info("Writing new SSH key into %s", client_key_path)
subprocess.run(["/usr/bin/ssh-keygen", "-t", "ed25519", "-N", "", "-f", client_key_path], check=True)
with open(client_key_path + ".pub", "r", encoding="utf-8") as pubkey:
oglive_public_key = pubkey.read()
# Add our public keys to the oglive, so that we can log in
public_keys = ""
for username in self.ssh_key_users:
self.__logger.debug("Looking for keys in user %s", username)
homedir = pwd.getpwnam(username).pw_dir
for key in self.key_names:
key_path = os.path.join(homedir, ".ssh", key)
self.__logger.debug("Checking if we have %s...", key_path)
if os.path.exists(key_path):
with open(key_path, "r", encoding='utf-8') as public_key_file:
self.__logger.info("Adding %s to authorized_keys", key_path)
public_key = public_key_file.read()
public_keys = public_keys + public_key + "\n"
self.__logger.debug("Writing %s", authorized_keys_path)
with open(authorized_keys_path, "w", encoding='utf-8') as auth_keys:
auth_keys.write(public_keys)
# hardlinks in the source package are not correctly packaged back as hardlinks.
# Taking the easy option of turning them into symlinks for now.
file_hashes = {}
with libarchive.file_writer(client_initrd_path_new, "cpio_newc", "zstd") as writer:
file_list = []
for root, subdirs, files in os.walk(temp_dir_path):
proot = pathlib.PurePosixPath(root)
relpath = proot.relative_to(temp_dir_path)
for file in files:
full_path = os.path.join(relpath, file)
#self.__logger.debug("%s", full_path)
digest = None
stat_data = os.stat(full_path)
with open(full_path, "rb") as in_file:
digest = hashlib.file_digest(in_file, "sha256").hexdigest()
if stat_data.st_size > 0 and not os.path.islink(full_path):
if digest in file_hashes:
target_path = pathlib.Path(file_hashes[digest])
link_path = target_path.relative_to(relpath, walk_up=True)
self.__logger.debug("%s was a duplicate of %s, linking to %s", full_path, file_hashes[digest], link_path)
os.unlink(full_path)
#os.link(file_hashes[digest], full_path)
os.symlink(link_path, full_path)
else:
file_hashes[digest] = full_path
writer.add_files(".", recursive=True )
os.rename(client_initrd_path, client_initrd_path + ".old")
if os.path.exists(client_initrd_path + ".sum"):
os.rename(client_initrd_path + ".sum", client_initrd_path + ".sum.old")
os.rename(client_initrd_path_new, client_initrd_path)
with open(client_initrd_path, "rb") as initrd_file:
hexdigest = hashlib.file_digest(initrd_file, "sha256").hexdigest()
with open(client_initrd_path + ".sum", "w", encoding="utf-8") as digest_file:
digest_file.write(hexdigest + "\n")
self.__logger.info("Updated initrd %s", client_initrd_path)
self.add_forgejo_sshkey(oglive_public_key, "Key for " + ogclient)
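The comment in the diff above notes that hardlinks in the initrd are repacked as symlinks to the first file with identical content. A standalone sketch of that deduplication idea (simplified to a flat directory with absolute symlinks; not the installer's actual code):

```python
import hashlib
import os
import tempfile

def dedup_with_symlinks(directory: str) -> int:
    """Replace files whose SHA-256 matches an earlier file with a symlink.

    Returns the number of files turned into symlinks. Simplified sketch:
    flat directory, absolute symlink targets.
    """
    seen = {}    # sha256 hex digest -> first path with that content
    linked = 0
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if not os.path.isfile(path) or os.path.islink(path):
            continue
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest in seen:
            # Duplicate content: replace the file with a symlink.
            os.unlink(path)
            os.symlink(seen[digest], path)
            linked += 1
        else:
            seen[digest] = path
    return linked

with tempfile.TemporaryDirectory() as d:
    for name, data in [("a", b"same"), ("b", b"same"), ("c", b"other")]:
        with open(os.path.join(d, name), "wb") as f:
            f.write(data)
    print(dedup_with_symlinks(d))  # 1: "b" becomes a symlink to "a"
```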
def install(self):
"""Instalar
@@ -590,7 +735,7 @@ class OpengnsysGitInstaller:
timeout = 60
)
self.__logger.info("Request status was %i", r.status_code)
self.__logger.info("Request status was %i, content %s", r.status_code, r.content)
def add_forgejo_sshkey(self, pubkey, description = ""):
token = ""
@@ -611,7 +756,7 @@ class OpengnsysGitInstaller:
timeout = 60
)
self.__logger.info("Request status was %i", r.status_code)
self.__logger.info("Request status was %i, content %s", r.status_code, r.content)
def add_forgejo_organization(self, pubkey, description = ""):
token = ""
@@ -632,15 +777,36 @@ class OpengnsysGitInstaller:
timeout = 60
)
self.__logger.info("Request status was %i", r.status_code)
self.__logger.info("Request status was %i, content %s", r.status_code, r.content)
if __name__ == '__main__':
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(name)20s - [%(levelname)5s] - %(message)s')
logger = logging.getLogger(__name__)
sys.stdout.reconfigure(encoding='utf-8')
opengnsys_log_dir = "/opt/opengnsys/log"
logger = logging.getLogger(__package__)
logger.setLevel(logging.DEBUG)
logger.info("Inicio del programa")
streamLog = logging.StreamHandler()
streamLog.setLevel(logging.INFO)
if not os.path.exists(opengnsys_log_dir):
os.mkdir(opengnsys_log_dir)
logFilePath = f"{opengnsys_log_dir}/git_installer.log"
fileLog = logging.FileHandler(logFilePath)
fileLog.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)24s - [%(levelname)5s] - %(message)s')
streamLog.setFormatter(formatter)
fileLog.setFormatter(formatter)
logger.addHandler(streamLog)
logger.addHandler(fileLog)
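The handler layout set up above (console at INFO, file at DEBUG, shared formatter) can be sketched standalone like this (the logger name and the temporary log path are illustrative, not the installer's real `/opt/opengnsys/log` path):

```python
import logging
import os
import tempfile

# A throwaway log path instead of /opt/opengnsys/log/git_installer.log.
log_path = os.path.join(tempfile.mkdtemp(), "git_installer.log")

logger = logging.getLogger("git_installer_demo")
logger.setLevel(logging.DEBUG)          # the logger passes everything on

console = logging.StreamHandler()
console.setLevel(logging.INFO)          # console stays quiet about DEBUG

file_log = logging.FileHandler(log_path)
file_log.setLevel(logging.DEBUG)        # file captures full detail

fmt = logging.Formatter("%(asctime)s - %(name)s - [%(levelname)s] - %(message)s")
console.setFormatter(fmt)
file_log.setFormatter(fmt)
logger.addHandler(console)
logger.addHandler(file_log)

logger.debug("detail only the file sees")
logger.info("progress both handlers see")

file_log.flush()
with open(log_path, encoding="utf-8") as f:
    contents = f.read()
print("detail only the file sees" in contents)  # True
```

The `--quiet` and `--verbose` flags then only need to adjust the console handler's level; the file log keeps full detail either way.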
parser = argparse.ArgumentParser(
prog="OpenGnsys Installer",
@@ -653,9 +819,22 @@ if __name__ == '__main__':
parser.add_argument('--ignoresshkey', action='store_true', help="Ignorar clave de SSH")
parser.add_argument('--usesshkey', type=str, help="Usar clave SSH especificada")
parser.add_argument('--test-createuser', action='store_true')
parser.add_argument('--extract-ssh-key', action='store_true', help="Extract SSH key from oglive")
parser.add_argument('--set-ssh-key', action='store_true', help="Configure SSH key in oglive")
parser.add_argument('--oglive', type=int, metavar='NUM', help = "Do SSH key manipulation on this oglive")
parser.add_argument('--quiet', action='store_true', help="Quiet console output")
parser.add_argument("-v", "--verbose", action="store_true", help = "Verbose console output")
args = parser.parse_args()
if args.quiet:
streamLog.setLevel(logging.WARNING)
if args.verbose:
streamLog.setLevel(logging.DEBUG)
installer = OpengnsysGitInstaller()
installer.set_testmode(args.testmode)
installer.set_ignoresshkey(args.ignoresshkey)
@@ -670,6 +849,12 @@ if __name__ == '__main__':
installer.add_forgejo_repo("linux")
elif args.test_createuser:
installer.set_ssh_user_group("oggit2", "oggit2")
elif args.extract_ssh_key:
key = installer._extract_ssh_key()
print(f"{key}")
elif args.set_ssh_key:
installer.set_ssh_key()
else:
installer.install()
installer.install_forgejo()

View File

@@ -0,0 +1 @@
version.py export-subst

View File

@@ -0,0 +1 @@
liberapay: Changaco

View File

@@ -0,0 +1,36 @@
name: CI

on:
  # Trigger the workflow on push or pull request events but only for the master branch
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

  # Allow running this workflow manually from the Actions tab
  workflow_dispatch:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install libarchive
        run: sudo apt-get install -y libarchive13
      - name: Install Python 3.11
        uses: actions/setup-python@v2
        with:
          python-version: '3.11'
      - name: Install Python 3.10
        uses: actions/setup-python@v2
        with:
          python-version: '3.10'
      - name: Install Python 3.9
        uses: actions/setup-python@v2
        with:
          python-version: '3.9'
      - name: Install Python 3.8
        uses: actions/setup-python@v2
        with:
          python-version: '3.8'
      - name: Install tox
        run: pip install tox
      - name: Run the tests
        run: tox

View File

@@ -0,0 +1,8 @@
*.egg-info/
/build/
/dist/
/env/
/htmlcov/
.coverage
*.pyc
.tox/

View File

@@ -0,0 +1 @@
https://creativecommons.org/publicdomain/zero/1.0/

View File

@@ -0,0 +1 @@
include version.py

View File

@@ -0,0 +1,147 @@
Metadata-Version: 2.1
Name: libarchive-c
Version: 5.1
Summary: Python interface to libarchive
Home-page: https://github.com/Changaco/python-libarchive-c
Author: Changaco
Author-email: changaco@changaco.oy.lc
License: CC0
Keywords: archive libarchive 7z tar bz2 zip gz
Description-Content-Type: text/x-rst
License-File: LICENSE.md
A Python interface to libarchive. It uses the standard ctypes_ module to
dynamically load and access the C library.
.. _ctypes: https://docs.python.org/3/library/ctypes.html
Installation
============
pip install libarchive-c
Compatibility
=============
python
------
python-libarchive-c is currently tested with python 3.8, 3.9, 3.10 and 3.11.
If you find an incompatibility with older versions you can send us a small patch,
but we won't accept big changes.
libarchive
----------
python-libarchive-c may not work properly with obsolete versions of libarchive such as the ones included in MacOS. In that case you can install a recent version of libarchive (e.g. with ``brew install libarchive`` on MacOS) and use the ``LIBARCHIVE`` environment variable to point python-libarchive-c to it::
export LIBARCHIVE=/usr/local/Cellar/libarchive/3.3.3/lib/libarchive.13.dylib
Usage
=====
Import::
import libarchive
Extracting archives
-------------------
To extract an archive, use the ``extract_file`` function::
    os.chdir('/path/to/target/directory')
    libarchive.extract_file('test.zip')
Alternatively, the ``extract_memory`` function can be used to extract from a buffer,
and ``extract_fd`` from a file descriptor.
The ``extract_*`` functions all have an integer ``flags`` argument which is passed
directly to the C function ``archive_write_disk_set_options()``. You can import
the ``EXTRACT_*`` constants from the ``libarchive.extract`` module and see the
official description of each flag in the ``archive_write_disk(3)`` man page.
By default, when the ``flags`` argument is ``None``, the ``SECURE_NODOTDOT``,
``SECURE_NOABSOLUTEPATHS`` and ``SECURE_SYMLINKS`` flags are passed to
libarchive, unless the current directory is the root (``/``).
Reading archives
----------------
To read an archive, use the ``file_reader`` function::
    with libarchive.file_reader('test.7z') as archive:
        for entry in archive:
            for block in entry.get_blocks():
                ...
Alternatively, the ``memory_reader`` function can be used to read from a buffer,
``fd_reader`` from a file descriptor, ``stream_reader`` from a stream object
(which must support the standard ``readinto`` method), and ``custom_reader``
from anywhere using callbacks.
To learn about the attributes of the ``entry`` object, see the ``libarchive/entry.py``
source code or run ``help(libarchive.entry.ArchiveEntry)`` in a Python shell.
Displaying progress
~~~~~~~~~~~~~~~~~~~
If your program processes large archives, you can keep track of its progress
with the ``bytes_read`` attribute. Here's an example of a progress bar using
`tqdm <https://pypi.org/project/tqdm/>`_::
    with tqdm(total=os.stat(archive_path).st_size, unit='bytes') as pbar, \
            libarchive.file_reader(archive_path) as archive:
        for entry in archive:
            ...
            pbar.update(archive.bytes_read - pbar.n)
Creating archives
-----------------
To create an archive, use the ``file_writer`` function::
    from libarchive.entry import FileType

    with libarchive.file_writer('test.tar.gz', 'ustar', 'gzip') as archive:
        # Add the `libarchive/` directory and everything in it (recursively),
        # then the `README.rst` file.
        archive.add_files('libarchive/', 'README.rst')

        # Add a regular file defined from scratch.
        data = b'foobar'
        archive.add_file_from_memory('../escape-test', len(data), data)

        # Add a directory defined from scratch.
        early_epoch = (42, 42)  # 1970-01-01 00:00:42.000000042
        archive.add_file_from_memory(
            'metadata-test', 0, b'',
            filetype=FileType.DIRECTORY, permission=0o755, uid=4242, gid=4242,
            atime=early_epoch, mtime=early_epoch, ctime=early_epoch, birthtime=early_epoch,
        )
Alternatively, the ``memory_writer`` function can be used to write to a memory buffer,
``fd_writer`` to a file descriptor, and ``custom_writer`` to a callback function.
For each of those functions, the mandatory second argument is the archive format,
and the optional third argument is the compression format (called “filter” in
libarchive). The acceptable values are listed in ``libarchive.ffi.WRITE_FORMATS``
and ``libarchive.ffi.WRITE_FILTERS``.
File metadata codecs
--------------------
By default, UTF-8 is used to read and write file attributes from and to archives.
A different codec can be specified through the ``header_codec`` arguments of the
``*_reader`` and ``*_writer`` functions. Example::
    with libarchive.file_writer('test.tar', 'ustar', header_codec='cp037') as archive:
        ...

    with file_reader('test.tar', header_codec='cp037') as archive:
        ...
In addition to file paths (``pathname`` and ``linkpath``), the specified codec is
used to encode and decode user and group names (``uname`` and ``gname``).
License
=======
`CC0 Public Domain Dedication <http://creativecommons.org/publicdomain/zero/1.0/>`_

View File

@@ -0,0 +1,135 @@
A Python interface to libarchive. It uses the standard ctypes_ module to
dynamically load and access the C library.
.. _ctypes: https://docs.python.org/3/library/ctypes.html
Installation
============
pip install libarchive-c
Compatibility
=============
python
------
python-libarchive-c is currently tested with python 3.8, 3.9, 3.10 and 3.11.
If you find an incompatibility with older versions you can send us a small patch,
but we won't accept big changes.
libarchive
----------
python-libarchive-c may not work properly with obsolete versions of libarchive such as the ones included in MacOS. In that case you can install a recent version of libarchive (e.g. with ``brew install libarchive`` on MacOS) and use the ``LIBARCHIVE`` environment variable to point python-libarchive-c to it::
export LIBARCHIVE=/usr/local/Cellar/libarchive/3.3.3/lib/libarchive.13.dylib
Usage
=====
Import::
import libarchive
Extracting archives
-------------------
To extract an archive, use the ``extract_file`` function::
    os.chdir('/path/to/target/directory')
    libarchive.extract_file('test.zip')
Alternatively, the ``extract_memory`` function can be used to extract from a buffer,
and ``extract_fd`` from a file descriptor.
The ``extract_*`` functions all have an integer ``flags`` argument which is passed
directly to the C function ``archive_write_disk_set_options()``. You can import
the ``EXTRACT_*`` constants from the ``libarchive.extract`` module and see the
official description of each flag in the ``archive_write_disk(3)`` man page.
By default, when the ``flags`` argument is ``None``, the ``SECURE_NODOTDOT``,
``SECURE_NOABSOLUTEPATHS`` and ``SECURE_SYMLINKS`` flags are passed to
libarchive, unless the current directory is the root (``/``).
Reading archives
----------------
To read an archive, use the ``file_reader`` function::
    with libarchive.file_reader('test.7z') as archive:
        for entry in archive:
            for block in entry.get_blocks():
                ...
Alternatively, the ``memory_reader`` function can be used to read from a buffer,
``fd_reader`` from a file descriptor, ``stream_reader`` from a stream object
(which must support the standard ``readinto`` method), and ``custom_reader``
from anywhere using callbacks.
To learn about the attributes of the ``entry`` object, see the ``libarchive/entry.py``
source code or run ``help(libarchive.entry.ArchiveEntry)`` in a Python shell.
Displaying progress
~~~~~~~~~~~~~~~~~~~
If your program processes large archives, you can keep track of its progress
with the ``bytes_read`` attribute. Here's an example of a progress bar using
`tqdm <https://pypi.org/project/tqdm/>`_::
    with tqdm(total=os.stat(archive_path).st_size, unit='bytes') as pbar, \
            libarchive.file_reader(archive_path) as archive:
        for entry in archive:
            ...
            pbar.update(archive.bytes_read - pbar.n)
Creating archives
-----------------
To create an archive, use the ``file_writer`` function::
    from libarchive.entry import FileType

    with libarchive.file_writer('test.tar.gz', 'ustar', 'gzip') as archive:
        # Add the `libarchive/` directory and everything in it (recursively),
        # then the `README.rst` file.
        archive.add_files('libarchive/', 'README.rst')

        # Add a regular file defined from scratch.
        data = b'foobar'
        archive.add_file_from_memory('../escape-test', len(data), data)

        # Add a directory defined from scratch.
        early_epoch = (42, 42)  # 1970-01-01 00:00:42.000000042
        archive.add_file_from_memory(
            'metadata-test', 0, b'',
            filetype=FileType.DIRECTORY, permission=0o755, uid=4242, gid=4242,
            atime=early_epoch, mtime=early_epoch, ctime=early_epoch, birthtime=early_epoch,
        )
Alternatively, the ``memory_writer`` function can be used to write to a memory buffer,
``fd_writer`` to a file descriptor, and ``custom_writer`` to a callback function.
For each of those functions, the mandatory second argument is the archive format,
and the optional third argument is the compression format (called “filter” in
libarchive). The acceptable values are listed in ``libarchive.ffi.WRITE_FORMATS``
and ``libarchive.ffi.WRITE_FILTERS``.
File metadata codecs
--------------------
By default, UTF-8 is used to read and write file attributes from and to archives.
A different codec can be specified through the ``header_codec`` arguments of the
``*_reader`` and ``*_writer`` functions. Example::
    with libarchive.file_writer('test.tar', 'ustar', header_codec='cp037') as archive:
        ...

    with file_reader('test.tar', header_codec='cp037') as archive:
        ...
In addition to file paths (``pathname`` and ``linkpath``), the specified codec is
used to encode and decode user and group names (``uname`` and ``gname``).
License
=======
`CC0 Public Domain Dedication <http://creativecommons.org/publicdomain/zero/1.0/>`_

View File

@@ -0,0 +1,5 @@
opengnsys-libarchive-c (5.1) UNRELEASED; urgency=medium

  * Initial release. (Closes: #XXXXXX)

 -- root <opengnsys@opengnsys.com>  Mon, 11 Nov 2024 17:11:16 +0000

View File

@@ -0,0 +1,29 @@
Source: opengnsys-libarchive-c
Maintainer: OpenGnsys <opengnsys@opengnsys.org>
XSBC-Original-Maintainer: Jérémy Bobbio <lunar@debian.org>
Section: python
Priority: optional
Build-Depends: debhelper-compat (= 12),
               dh-python,
               libarchive-dev,
               python3-all,
               python3-mock,
               python3-pytest,
               python3-setuptools
Standards-Version: 4.5.0
Rules-Requires-Root: no
Homepage: https://github.com/Changaco/python-libarchive-c
Vcs-Browser: https://salsa.debian.org/debian/python-libarchive-c
Vcs-Git: https://salsa.debian.org/debian/python-libarchive-c.git
Package: opengnsys-libarchive-c
Architecture: all
Depends: ${lib:Depends}, ${misc:Depends}, ${python3:Depends}
Description: Python3 interface to libarchive
 The libarchive library provides a flexible interface for reading and writing
 archives in various formats such as tar and cpio. libarchive also supports
 reading and writing archives compressed using various compression filters such
 as gzip and bzip2.
 .
 This package contains a Python3 interface to libarchive written using the
 standard ctypes module to dynamically load and access the C library.

View File

@@ -0,0 +1,208 @@
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: python-libarchive-c
Source: https://github.com/Changaco/python-libarchive-c
Files: *
Copyright: 2014-2018 Changaco <changaco@changaco.oy.lc>
License: CC-0
Files: tests/surrogateescape.py
Copyright: 2015 Changaco <changaco@changaco.oy.lc>
2011-2013 Victor Stinner <victor.stinner@gmail.com>
License: BSD-2-clause or PSF-2
Files: debian/*
Copyright: 2015 Jérémy Bobbio <lunar@debian.org>
2019 Mattia Rizzolo <mattia@debian.org>
License: permissive
Copying and distribution of this package, with or without
modification, are permitted in any medium without royalty
provided the copyright notice and this notice are
preserved.
License: BSD-2-clause
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.
License: PSF-2
1. This LICENSE AGREEMENT is between the Python Software Foundation ("PSF"),
and the Individual or Organization ("Licensee") accessing and otherwise using
this software ("Python") in source or binary form and its associated
documentation.
.
2. Subject to the terms and conditions of this License Agreement, PSF hereby
grants Licensee a nonexclusive, royalty-free, world-wide license to
reproduce, analyze, test, perform and/or display publicly, prepare derivative
works, distribute, and otherwise use Python alone or in any derivative
version, provided, however, that PSF's License Agreement and PSF's notice of
copyright, i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006 Python
Software Foundation; All Rights Reserved" are retained in Python alone or in
any derivative version prepared by Licensee.
.
3. In the event Licensee prepares a derivative work that is based on or
incorporates Python or any part thereof, and wants to make the derivative
work available to others as provided herein, then Licensee hereby agrees to
include in any such work a brief summary of the changes made to Python.
.
4. PSF is making Python available to Licensee on an "AS IS" basis. PSF MAKES
NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT
NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF
MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF
PYTHON WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.
.
5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON FOR ANY
INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF
MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, OR ANY DERIVATIVE
THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
.
6. This License Agreement will automatically terminate upon a material breach
of its terms and conditions.
.
7. Nothing in this License Agreement shall be deemed to create any
relationship of agency, partnership, or joint venture between PSF and
Licensee. This License Agreement does not grant permission to use PSF
trademarks or trade name in a trademark sense to endorse or promote products
or services of Licensee, or any third party.
.
8. By copying, installing or otherwise using Python, Licensee agrees to be
bound by the terms and conditions of this License Agreement.
License: CC-0
Statement of Purpose
.
The laws of most jurisdictions throughout the world automatically
confer exclusive Copyright and Related Rights (defined below) upon
the creator and subsequent owner(s) (each and all, an "owner") of an
original work of authorship and/or a database (each, a "Work").
.
Certain owners wish to permanently relinquish those rights to a Work
for the purpose of contributing to a commons of creative, cultural
and scientific works ("Commons") that the public can reliably and
without fear of later claims of infringement build upon, modify,
incorporate in other works, reuse and redistribute as freely as
possible in any form whatsoever and for any purposes, including
without limitation commercial purposes. These owners may contribute
to the Commons to promote the ideal of a free culture and the further
production of creative, cultural and scientific works, or to gain
reputation or greater distribution for their Work in part through the
use and efforts of others.
.
For these and/or other purposes and motivations, and without any
expectation of additional consideration or compensation, the person
associating CC0 with a Work (the "Affirmer"), to the extent that he
or she is an owner of Copyright and Related Rights in the Work,
voluntarily elects to apply CC0 to the Work and publicly distribute
the Work under its terms, with knowledge of his or her Copyright and
Related Rights in the Work and the meaning and intended legal effect
of CC0 on those rights.
.
1. Copyright and Related Rights. A Work made available under CC0 may
be protected by copyright and related or neighboring rights
("Copyright and Related Rights"). Copyright and Related Rights
include, but are not limited to, the following:
.
i. the right to reproduce, adapt, distribute, perform, display,
communicate, and translate a Work;
ii. moral rights retained by the original author(s) and/or
performer(s);
iii. publicity and privacy rights pertaining to a person's image
or likeness depicted in a Work;
iv. rights protecting against unfair competition in regards to a
Work, subject to the limitations in paragraph 4(a), below;
v. rights protecting the extraction, dissemination, use and
reuse of data in a Work;
vi. database rights (such as those arising under Directive
96/9/EC of the European Parliament and of the Council of 11
March 1996 on the legal protection of databases, and under
any national implementation thereof, including any amended or
successor version of such directive); and
vii. other similar, equivalent or corresponding rights throughout
the world based on applicable law or treaty, and any national
implementations thereof.
.
2. Waiver. To the greatest extent permitted by, but not in
contravention of, applicable law, Affirmer hereby overtly, fully,
permanently, irrevocably and unconditionally waives, abandons, and
surrenders all of Affirmer's Copyright and Related Rights and
associated claims and causes of action, whether now known or
unknown (including existing as well as future claims and causes of
action), in the Work (i) in all territories worldwide, (ii) for
the maximum duration provided by applicable law or treaty
(including future time extensions), (iii) in any current or future
medium and for any number of copies, and (iv) for any purpose
whatsoever, including without limitation commercial, advertising
or promotional purposes (the "Waiver"). Affirmer makes the Waiver
for the benefit of each member of the public at large and to the
detriment of Affirmer's heirs and successors, fully intending that
such Waiver shall not be subject to revocation, rescission,
cancellation, termination, or any other legal or equitable action
to disrupt the quiet enjoyment of the Work by the public as
contemplated by Affirmer's express Statement of Purpose.
.
3. Public License Fallback. Should any part of the Waiver for any
reason be judged legally invalid or ineffective under applicable law,
then the Waiver shall be preserved to the maximum extent permitted
taking into account Affirmer's express Statement of Purpose. In
addition, to the extent the Waiver is so judged Affirmer hereby
grants to each affected person a royalty-free, non transferable, non
sublicensable, non exclusive, irrevocable and unconditional license
to exercise Affirmer's Copyright and Related Rights in the Work (i)
in all territories worldwide, (ii) for the maximum duration provided
by applicable law or treaty (including future time extensions), (iii)
in any current or future medium and for any number of copies, and
(iv) for any purpose whatsoever, including without limitation
commercial, advertising or promotional purposes (the "License"). The
License shall be deemed effective as of the date CC0 was applied by
Affirmer to the Work. Should any part of the License for any reason
be judged legally invalid or ineffective under applicable law, such
partial invalidity or ineffectiveness shall not invalidate the
remainder of the License, and in such case Affirmer hereby affirms
that he or she will not (i) exercise any of his or her remaining
Copyright and Related Rights in the Work or (ii) assert any
associated claims and causes of action with respect to the Work, in
either case contrary to Affirmer's express Statement of Purpose.
.
4. Limitations and Disclaimers.
.
a. No trademark or patent rights held by Affirmer are waived,
abandoned, surrendered, licensed or otherwise affected by
this document.
b. Affirmer offers the Work as-is and makes no representations
or warranties of any kind concerning the Work, express,
implied, statutory or otherwise, including without limitation
warranties of title, merchantability, fitness for a
particular purpose, non infringement, or the absence of
latent or other defects, accuracy, or the present or absence
of errors, whether or not discoverable, all to the greatest
extent permissible under applicable law.
c. Affirmer disclaims responsibility for clearing rights of
other persons that may apply to the Work or any use thereof,
including without limitation any person's Copyright and
Related Rights in the Work. Further, Affirmer disclaims
responsibility for obtaining any necessary consents,
permissions or other rights required for any use of the
Work.
d. Affirmer understands and acknowledges that Creative Commons
is not a party to this document and has no duty or obligation
with respect to this CC0 or use of the Work.

@ -0,0 +1,2 @@
opengnsys-libarchive-c_5.1_all.deb python optional
opengnsys-libarchive-c_5.1_amd64.buildinfo python optional

@ -0,0 +1,2 @@
misc:Depends=
misc:Pre-Depends=

@ -0,0 +1,22 @@
#!/usr/bin/make -f

export LC_ALL=C.UTF-8
export PYBUILD_NAME = libarchive-c
export PYBUILD_BEFORE_TEST = cp -av README.rst {build_dir}
export PYBUILD_TEST_ARGS = -vv -s
export PYBUILD_AFTER_TEST = rm -v {build_dir}/README.rst

# ./usr/lib/python3/dist-packages/libarchive/
export PYBUILD_INSTALL_ARGS=--install-lib=/opt/opengnsys/python3/dist-packages/

%:
	dh $@ --with python3 --buildsystem=pybuild

override_dh_gencontrol:
	dh_gencontrol -- \
		-Vlib:Depends=$(shell dpkg-query -W -f '$${Depends}' libarchive-dev \
			| sed -E 's/.*(libarchive[[:alnum:].-]+).*/\1/')

override_dh_installdocs:
# Nothing, we don't want docs

override_dh_installchangelogs:
# Nothing, we don't want the changelog
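The `override_dh_gencontrol` rule derives the `lib:Depends` substitution variable by extracting the runtime `libarchive` package name from `libarchive-dev`'s dependency list. A minimal sketch of the `sed` step, run against a hard-coded sample string (the real input comes from `dpkg-query`, and the package version here is made up):

```shell
# Hypothetical Depends line as `dpkg-query -W -f '${Depends}' libarchive-dev`
# might print it; the sed call keeps only the libarchive runtime package name.
deps='libc6-dev, libarchive13 (= 3.6.2-1), zlib1g-dev'
echo "$deps" | sed -E 's/.*(libarchive[[:alnum:].-]+).*/\1/'
# prints: libarchive13
```

This way the binary package depends on exactly the libarchive shared-library package the build machine's `-dev` package pulls in.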

@ -0,0 +1 @@
3.0 (quilt)

@ -0,0 +1,2 @@
Tests: upstream-tests
Depends: @, python3-mock, python3-pytest

@ -0,0 +1,14 @@
#!/bin/sh
set -e
if ! [ -d "$AUTOPKGTEST_TMP" ]; then
echo "AUTOPKGTEST_TMP not set." >&2
exit 1
fi
cp -rv tests "$AUTOPKGTEST_TMP"
cd "$AUTOPKGTEST_TMP"
mkdir -v libarchive
touch README.rst
py.test-3 tests -vv -l -r a

@ -0,0 +1,3 @@
version=3
https://pypi.python.org/simple/libarchive-c \
.*/libarchive-c-(.+)\.tar\.gz#.*
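The `debian/watch` pattern above matches sdist links on the PyPI simple index and captures the upstream version. The extraction can be sketched with Python's `re` module against a made-up link (the host and the `#sha256=...` fragment are placeholders for what PyPI actually serves):

```python
import re

# Hypothetical href as found on a PyPI simple index page.
href = "https://files.example/libarchive-c-5.1.tar.gz#sha256=deadbeef"
m = re.match(r".*/libarchive-c-(.+)\.tar\.gz#.*", href)
print(m.group(1))  # -> 5.1
```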

@ -0,0 +1,17 @@
from .entry import ArchiveEntry
from .exception import ArchiveError
from .extract import extract_fd, extract_file, extract_memory
from .read import (
custom_reader, fd_reader, file_reader, memory_reader, stream_reader,
seekable_stream_reader
)
from .write import custom_writer, fd_writer, file_writer, memory_writer
__all__ = [x.__name__ for x in (
ArchiveEntry,
ArchiveError,
extract_fd, extract_file, extract_memory,
custom_reader, fd_reader, file_reader, memory_reader, stream_reader,
seekable_stream_reader,
custom_writer, fd_writer, file_writer, memory_writer
)]

@ -0,0 +1,450 @@
from contextlib import contextmanager
from ctypes import create_string_buffer
from enum import IntEnum
import math
from . import ffi
class FileType(IntEnum):
NAMED_PIPE = AE_IFIFO = 0o010000 # noqa: E221
CHAR_DEVICE = AE_IFCHR = 0o020000 # noqa: E221
DIRECTORY = AE_IFDIR = 0o040000 # noqa: E221
BLOCK_DEVICE = AE_IFBLK = 0o060000 # noqa: E221
REGULAR_FILE = AE_IFREG = 0o100000 # noqa: E221
SYMBOLIC_LINK = AE_IFLNK = 0o120000 # noqa: E221
SOCKET = AE_IFSOCK = 0o140000 # noqa: E221
@contextmanager
def new_archive_entry():
entry_p = ffi.entry_new()
try:
yield entry_p
finally:
ffi.entry_free(entry_p)
def format_time(seconds, nanos):
"""Return seconds.nanos as a float when nanos is set, or seconds as an int when not."""
if nanos:
return float(seconds) + float(nanos) / 1000000000.0
return int(seconds)
class ArchiveEntry:
__slots__ = ('_archive_p', '_entry_p', 'header_codec')
def __init__(self, archive_p=None, header_codec='utf-8', **attributes):
"""Allocate memory for an `archive_entry` struct.
The `header_codec` is used to decode and encode file paths and other
attributes.
The `**attributes` are passed to the `modify` method.
"""
self._archive_p = archive_p
self._entry_p = ffi.entry_new()
self.header_codec = header_codec
if attributes:
self.modify(**attributes)
def __del__(self):
"""Free the C struct"""
ffi.entry_free(self._entry_p)
def __str__(self):
"""Returns the file's path"""
return self.pathname
def modify(self, header_codec=None, **attributes):
"""Convenience method to modify the entry's attributes.
Args:
filetype (int): the file's type, see the `FileType` class for values
pathname (str): the file's path
linkpath (str): the other path of the file, if the file is a link
size (int | None): the file's size, in bytes
perm (int): the file's permissions in standard Unix format, e.g. 0o640
uid (int): the file owner's numerical identifier
gid (int): the file group's numerical identifier
uname (str | bytes): the file owner's name
gname (str | bytes): the file group's name
atime (int | Tuple[int, int] | float | None):
the file's most recent access time,
either in seconds or as a tuple (seconds, nanoseconds)
mtime (int | Tuple[int, int] | float | None):
the file's most recent modification time,
either in seconds or as a tuple (seconds, nanoseconds)
ctime (int | Tuple[int, int] | float | None):
the file's most recent metadata change time,
either in seconds or as a tuple (seconds, nanoseconds)
birthtime (int | Tuple[int, int] | float | None):
the file's creation time (for archive formats that support it),
either in seconds or as a tuple (seconds, nanoseconds)
rdev (int | Tuple[int, int]): device number, if the file is a device
rdevmajor (int): major part of the device number
rdevminor (int): minor part of the device number
"""
if header_codec:
self.header_codec = header_codec
for name, value in attributes.items():
setattr(self, name, value)
@property
def filetype(self):
return ffi.entry_filetype(self._entry_p)
@filetype.setter
def filetype(self, value):
ffi.entry_set_filetype(self._entry_p, value)
@property
def uid(self):
return ffi.entry_uid(self._entry_p)
@uid.setter
def uid(self, uid):
ffi.entry_set_uid(self._entry_p, uid)
@property
def gid(self):
return ffi.entry_gid(self._entry_p)
@gid.setter
def gid(self, gid):
ffi.entry_set_gid(self._entry_p, gid)
@property
def uname(self):
uname = ffi.entry_uname_w(self._entry_p)
if not uname:
uname = ffi.entry_uname(self._entry_p)
if uname is not None:
try:
uname = uname.decode(self.header_codec)
except UnicodeError:
pass
return uname
@uname.setter
def uname(self, value):
if not isinstance(value, bytes):
value = value.encode(self.header_codec)
if self.header_codec == 'utf-8':
ffi.entry_update_uname_utf8(self._entry_p, value)
else:
ffi.entry_copy_uname(self._entry_p, value)
@property
def gname(self):
gname = ffi.entry_gname_w(self._entry_p)
if not gname:
gname = ffi.entry_gname(self._entry_p)
if gname is not None:
try:
gname = gname.decode(self.header_codec)
except UnicodeError:
pass
return gname
@gname.setter
def gname(self, value):
if not isinstance(value, bytes):
value = value.encode(self.header_codec)
if self.header_codec == 'utf-8':
ffi.entry_update_gname_utf8(self._entry_p, value)
else:
ffi.entry_copy_gname(self._entry_p, value)
def get_blocks(self, block_size=ffi.page_size):
"""Read the file's content, keeping only one chunk in memory at a time.
Don't do anything like `list(entry.get_blocks())`; it would silently fail.
Args:
block_size (int): the buffer's size, in bytes
"""
archive_p = self._archive_p
if not archive_p:
raise TypeError("this entry isn't linked to any content")
buf = create_string_buffer(block_size)
read = ffi.read_data
while 1:
r = read(archive_p, buf, block_size)
if r == 0:
break
yield buf.raw[0:r]
self.__class__ = ConsumedArchiveEntry
@property
def isblk(self):
return self.filetype & 0o170000 == 0o060000
@property
def ischr(self):
return self.filetype & 0o170000 == 0o020000
@property
def isdir(self):
return self.filetype & 0o170000 == 0o040000
@property
def isfifo(self):
return self.filetype & 0o170000 == 0o010000
@property
def islnk(self):
return bool(ffi.entry_hardlink_w(self._entry_p) or
ffi.entry_hardlink(self._entry_p))
@property
def issym(self):
return self.filetype & 0o170000 == 0o120000
@property
def isreg(self):
return self.filetype & 0o170000 == 0o100000
@property
def isfile(self):
return self.isreg
@property
def issock(self):
return self.filetype & 0o170000 == 0o140000
@property
def isdev(self):
return self.ischr or self.isblk or self.isfifo or self.issock
@property
def atime(self):
if not ffi.entry_atime_is_set(self._entry_p):
return None
sec_val = ffi.entry_atime(self._entry_p)
nsec_val = ffi.entry_atime_nsec(self._entry_p)
return format_time(sec_val, nsec_val)
@atime.setter
def atime(self, value):
if value is None:
ffi.entry_unset_atime(self._entry_p)
elif isinstance(value, int):
self.set_atime(value)
elif isinstance(value, tuple):
self.set_atime(*value)
else:
fraction, seconds = math.modf(value)  # modf returns (fractional, integral)
self.set_atime(int(seconds), int(fraction * 1_000_000_000))
def set_atime(self, timestamp_sec, timestamp_nsec=0):
"Kept for backward compatibility. `entry.atime = ...` is supported now."
return ffi.entry_set_atime(self._entry_p, timestamp_sec, timestamp_nsec)
@property
def mtime(self):
if not ffi.entry_mtime_is_set(self._entry_p):
return None
sec_val = ffi.entry_mtime(self._entry_p)
nsec_val = ffi.entry_mtime_nsec(self._entry_p)
return format_time(sec_val, nsec_val)
@mtime.setter
def mtime(self, value):
if value is None:
ffi.entry_unset_mtime(self._entry_p)
elif isinstance(value, int):
self.set_mtime(value)
elif isinstance(value, tuple):
self.set_mtime(*value)
else:
fraction, seconds = math.modf(value)  # modf returns (fractional, integral)
self.set_mtime(int(seconds), int(fraction * 1_000_000_000))
def set_mtime(self, timestamp_sec, timestamp_nsec=0):
"Kept for backward compatibility. `entry.mtime = ...` is supported now."
return ffi.entry_set_mtime(self._entry_p, timestamp_sec, timestamp_nsec)
@property
def ctime(self):
if not ffi.entry_ctime_is_set(self._entry_p):
return None
sec_val = ffi.entry_ctime(self._entry_p)
nsec_val = ffi.entry_ctime_nsec(self._entry_p)
return format_time(sec_val, nsec_val)
@ctime.setter
def ctime(self, value):
if value is None:
ffi.entry_unset_ctime(self._entry_p)
elif isinstance(value, int):
self.set_ctime(value)
elif isinstance(value, tuple):
self.set_ctime(*value)
else:
fraction, seconds = math.modf(value)  # modf returns (fractional, integral)
self.set_ctime(int(seconds), int(fraction * 1_000_000_000))
def set_ctime(self, timestamp_sec, timestamp_nsec=0):
"Kept for backward compatibility. `entry.ctime = ...` is supported now."
return ffi.entry_set_ctime(self._entry_p, timestamp_sec, timestamp_nsec)
@property
def birthtime(self):
if not ffi.entry_birthtime_is_set(self._entry_p):
return None
sec_val = ffi.entry_birthtime(self._entry_p)
nsec_val = ffi.entry_birthtime_nsec(self._entry_p)
return format_time(sec_val, nsec_val)
@birthtime.setter
def birthtime(self, value):
if value is None:
ffi.entry_unset_birthtime(self._entry_p)
elif isinstance(value, int):
self.set_birthtime(value)
elif isinstance(value, tuple):
self.set_birthtime(*value)
else:
fraction, seconds = math.modf(value)  # modf returns (fractional, integral)
self.set_birthtime(int(seconds), int(fraction * 1_000_000_000))
def set_birthtime(self, timestamp_sec, timestamp_nsec=0):
"Kept for backward compatibility. `entry.birthtime = ...` is supported now."
return ffi.entry_set_birthtime(
self._entry_p, timestamp_sec, timestamp_nsec
)
@property
def pathname(self):
path = ffi.entry_pathname_w(self._entry_p)
if not path:
path = ffi.entry_pathname(self._entry_p)
if path is not None:
try:
path = path.decode(self.header_codec)
except UnicodeError:
pass
return path
@pathname.setter
def pathname(self, value):
if not isinstance(value, bytes):
value = value.encode(self.header_codec)
if self.header_codec == 'utf-8':
ffi.entry_update_pathname_utf8(self._entry_p, value)
else:
ffi.entry_copy_pathname(self._entry_p, value)
@property
def linkpath(self):
path = (
(
ffi.entry_symlink_w(self._entry_p) or
ffi.entry_symlink(self._entry_p)
) if self.issym else (
ffi.entry_hardlink_w(self._entry_p) or
ffi.entry_hardlink(self._entry_p)
)
)
if isinstance(path, bytes):
try:
path = path.decode(self.header_codec)
except UnicodeError:
pass
return path
@linkpath.setter
def linkpath(self, value):
if not isinstance(value, bytes):
value = value.encode(self.header_codec)
if self.header_codec == 'utf-8':
ffi.entry_update_link_utf8(self._entry_p, value)
else:
ffi.entry_copy_link(self._entry_p, value)
# aliases for compatibility with the standard `tarfile` module
path = property(pathname.fget, pathname.fset, doc="alias of pathname")
name = path
linkname = property(linkpath.fget, linkpath.fset, doc="alias of linkpath")
@property
def size(self):
if ffi.entry_size_is_set(self._entry_p):
return ffi.entry_size(self._entry_p)
@size.setter
def size(self, value):
if value is None:
ffi.entry_unset_size(self._entry_p)
else:
ffi.entry_set_size(self._entry_p, value)
@property
def mode(self):
return ffi.entry_mode(self._entry_p)
@mode.setter
def mode(self, value):
ffi.entry_set_mode(self._entry_p, value)
@property
def strmode(self):
"""The file's mode as a string, e.g. '?rwxrwx---'"""
# note we strip the mode because archive_entry_strmode
# returns a trailing space: strcpy(bp, "?rwxrwxrwx ");
return ffi.entry_strmode(self._entry_p).strip()
@property
def perm(self):
return ffi.entry_perm(self._entry_p)
@perm.setter
def perm(self, value):
ffi.entry_set_perm(self._entry_p, value)
@property
def rdev(self):
return ffi.entry_rdev(self._entry_p)
@rdev.setter
def rdev(self, value):
if isinstance(value, tuple):
ffi.entry_set_rdevmajor(self._entry_p, value[0])
ffi.entry_set_rdevminor(self._entry_p, value[1])
else:
ffi.entry_set_rdev(self._entry_p, value)
@property
def rdevmajor(self):
return ffi.entry_rdevmajor(self._entry_p)
@rdevmajor.setter
def rdevmajor(self, value):
ffi.entry_set_rdevmajor(self._entry_p, value)
@property
def rdevminor(self):
return ffi.entry_rdevminor(self._entry_p)
@rdevminor.setter
def rdevminor(self, value):
ffi.entry_set_rdevminor(self._entry_p, value)
class ConsumedArchiveEntry(ArchiveEntry):
__slots__ = ()
def get_blocks(self, **kw):
raise TypeError("the content of this entry has already been read")
class PassedArchiveEntry(ArchiveEntry):
__slots__ = ()
def get_blocks(self, **kw):
raise TypeError("this entry is passed, it's too late to read its content")
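The `is*` properties in `ArchiveEntry` all test the mode against the `0o170000` file-type mask, and the `AE_IF*` values mirror the `S_IF*` constants from `<sys/stat.h>`, so the logic can be cross-checked against Python's `stat` module:

```python
import stat

AE_IFMT = 0o170000   # file-type mask used by isreg, isdir, etc.
AE_IFREG = 0o100000
AE_IFDIR = 0o040000

mode = 0o100644  # a regular file with rw-r--r-- permissions
assert mode & AE_IFMT == AE_IFREG          # what entry.isreg computes
assert stat.S_IFMT(mode) == stat.S_IFREG   # stdlib equivalent
print(oct(mode & AE_IFMT))  # -> 0o100000
```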

@ -0,0 +1,12 @@
class ArchiveError(Exception):
def __init__(self, msg, errno=None, retcode=None, archive_p=None):
self.msg = msg
self.errno = errno
self.retcode = retcode
self.archive_p = archive_p
def __str__(self):
p = '%s (errno=%s, retcode=%s, archive_p=%s)'
return p % (self.msg, self.errno, self.retcode, self.archive_p)

@ -0,0 +1,88 @@
from contextlib import contextmanager
from ctypes import byref, c_longlong, c_size_t, c_void_p
import os
from .ffi import (
write_disk_new, write_disk_set_options, write_free, write_header,
read_data_block, write_data_block, write_finish_entry, ARCHIVE_EOF
)
from .read import fd_reader, file_reader, memory_reader
EXTRACT_OWNER = 0x0001
EXTRACT_PERM = 0x0002
EXTRACT_TIME = 0x0004
EXTRACT_NO_OVERWRITE = 0x0008
EXTRACT_UNLINK = 0x0010
EXTRACT_ACL = 0x0020
EXTRACT_FFLAGS = 0x0040
EXTRACT_XATTR = 0x0080
EXTRACT_SECURE_SYMLINKS = 0x0100
EXTRACT_SECURE_NODOTDOT = 0x0200
EXTRACT_NO_AUTODIR = 0x0400
EXTRACT_NO_OVERWRITE_NEWER = 0x0800
EXTRACT_SPARSE = 0x1000
EXTRACT_MAC_METADATA = 0x2000
EXTRACT_NO_HFS_COMPRESSION = 0x4000
EXTRACT_HFS_COMPRESSION_FORCED = 0x8000
EXTRACT_SECURE_NOABSOLUTEPATHS = 0x10000
EXTRACT_CLEAR_NOCHANGE_FFLAGS = 0x20000
PREVENT_ESCAPE = (
EXTRACT_SECURE_NOABSOLUTEPATHS |
EXTRACT_SECURE_NODOTDOT |
EXTRACT_SECURE_SYMLINKS
)
@contextmanager
def new_archive_write_disk(flags):
archive_p = write_disk_new()
write_disk_set_options(archive_p, flags)
try:
yield archive_p
finally:
write_free(archive_p)
def extract_entries(entries, flags=None):
"""Extracts the given archive entries into the current directory.
"""
if flags is None:
if os.getcwd() == '/':
# If the current directory is the root, then trying to prevent
# escaping is probably undesirable.
flags = 0
else:
flags = PREVENT_ESCAPE
buff, size, offset = c_void_p(), c_size_t(), c_longlong()
buff_p, size_p, offset_p = byref(buff), byref(size), byref(offset)
with new_archive_write_disk(flags) as write_p:
for entry in entries:
write_header(write_p, entry._entry_p)
read_p = entry._archive_p
while 1:
r = read_data_block(read_p, buff_p, size_p, offset_p)
if r == ARCHIVE_EOF:
break
write_data_block(write_p, buff, size, offset)
write_finish_entry(write_p)
def extract_fd(fd, flags=None):
"""Extracts an archive from a file descriptor into the current directory.
"""
with fd_reader(fd) as archive:
extract_entries(archive, flags)
def extract_file(filepath, flags=None):
"""Extracts an archive from a file into the current directory."""
with file_reader(filepath) as archive:
extract_entries(archive, flags)
def extract_memory(buffer_, flags=None):
"""Extracts an archive from memory into the current directory."""
with memory_reader(buffer_) as archive:
extract_entries(archive, flags)
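`PREVENT_ESCAPE` in `extract.py` is a bitwise OR of the three `EXTRACT_SECURE_*` flags and is applied by default unless extraction runs from `/`. Its composition can be verified in isolation:

```python
EXTRACT_SECURE_SYMLINKS = 0x0100
EXTRACT_SECURE_NODOTDOT = 0x0200
EXTRACT_SECURE_NOABSOLUTEPATHS = 0x10000

PREVENT_ESCAPE = (
    EXTRACT_SECURE_NOABSOLUTEPATHS |
    EXTRACT_SECURE_NODOTDOT |
    EXTRACT_SECURE_SYMLINKS
)
assert PREVENT_ESCAPE == 0x10300
# Each protection remains individually testable on the combined mask:
assert PREVENT_ESCAPE & EXTRACT_SECURE_SYMLINKS
```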

@ -0,0 +1,364 @@
from ctypes import (
c_char_p, c_int, c_uint, c_long, c_longlong, c_size_t, c_int64,
c_void_p, c_wchar_p, CFUNCTYPE, POINTER,
)
try:
from ctypes import c_ssize_t
except ImportError:
from ctypes import c_longlong as c_ssize_t
import ctypes
from ctypes.util import find_library
import logging
import mmap
import os
import sysconfig
from .exception import ArchiveError
logger = logging.getLogger('libarchive')
page_size = mmap.PAGESIZE
libarchive_path = os.environ.get('LIBARCHIVE') or find_library('archive')
libarchive = ctypes.cdll.LoadLibrary(libarchive_path)
# Constants
ARCHIVE_EOF = 1 # Found end of archive.
ARCHIVE_OK = 0 # Operation was successful.
ARCHIVE_RETRY = -10 # Retry might succeed.
ARCHIVE_WARN = -20 # Partial success.
ARCHIVE_FAILED = -25 # Current operation cannot complete.
ARCHIVE_FATAL = -30 # No more operations are possible.
# Callback types
WRITE_CALLBACK = CFUNCTYPE(
c_ssize_t, c_void_p, c_void_p, POINTER(c_void_p), c_size_t
)
READ_CALLBACK = CFUNCTYPE(
c_ssize_t, c_void_p, c_void_p, POINTER(c_void_p)
)
SEEK_CALLBACK = CFUNCTYPE(
c_longlong, c_void_p, c_void_p, c_longlong, c_int
)
OPEN_CALLBACK = CFUNCTYPE(c_int, c_void_p, c_void_p)
CLOSE_CALLBACK = CFUNCTYPE(c_int, c_void_p, c_void_p)
NO_OPEN_CB = ctypes.cast(None, OPEN_CALLBACK)
NO_CLOSE_CB = ctypes.cast(None, CLOSE_CALLBACK)
# Type aliases, for readability
c_archive_p = c_void_p
c_archive_entry_p = c_void_p
if sysconfig.get_config_var('SIZEOF_TIME_T') == 8:
c_time_t = c_int64
else:
c_time_t = c_long
# Helper functions
def _error_string(archive_p):
msg = error_string(archive_p)
if msg is None:
return
try:
return msg.decode('ascii')
except UnicodeDecodeError:
return msg
def archive_error(archive_p, retcode):
msg = _error_string(archive_p)
return ArchiveError(msg, errno(archive_p), retcode, archive_p)
def check_null(ret, func, args):
if ret is None:
raise ArchiveError(func.__name__+' returned NULL')
return ret
def check_int(retcode, func, args):
if retcode >= 0:
return retcode
elif retcode == ARCHIVE_WARN:
logger.warning(_error_string(args[0]))
return retcode
else:
raise archive_error(args[0], retcode)
def ffi(name, argtypes, restype, errcheck=None):
f = getattr(libarchive, 'archive_'+name)
f.argtypes = argtypes
f.restype = restype
if errcheck:
f.errcheck = errcheck
globals()[name] = f
return f
def get_read_format_function(format_name):
function_name = 'read_support_format_' + format_name
func = globals().get(function_name)
if func:
return func
try:
return ffi(function_name, [c_archive_p], c_int, check_int)
except AttributeError:
raise ValueError('the read format %r is not available' % format_name)
def get_read_filter_function(filter_name):
function_name = 'read_support_filter_' + filter_name
func = globals().get(function_name)
if func:
return func
try:
return ffi(function_name, [c_archive_p], c_int, check_int)
except AttributeError:
raise ValueError('the read filter %r is not available' % filter_name)
def get_write_format_function(format_name):
function_name = 'write_set_format_' + format_name
func = globals().get(function_name)
if func:
return func
try:
return ffi(function_name, [c_archive_p], c_int, check_int)
except AttributeError:
raise ValueError('the write format %r is not available' % format_name)
def get_write_filter_function(filter_name):
function_name = 'write_add_filter_' + filter_name
func = globals().get(function_name)
if func:
return func
try:
return ffi(function_name, [c_archive_p], c_int, check_int)
except AttributeError:
raise ValueError('the write filter %r is not available' % filter_name)
# FFI declarations
# library version
version_number = ffi('version_number', [], c_int, check_int)
# archive_util
errno = ffi('errno', [c_archive_p], c_int)
error_string = ffi('error_string', [c_archive_p], c_char_p)
ffi('filter_bytes', [c_archive_p, c_int], c_longlong)
ffi('filter_count', [c_archive_p], c_int)
ffi('filter_name', [c_archive_p, c_int], c_char_p)
ffi('format_name', [c_archive_p], c_char_p)
# archive_entry
ffi('entry_new', [], c_archive_entry_p, check_null)
ffi('entry_filetype', [c_archive_entry_p], c_int)
ffi('entry_atime', [c_archive_entry_p], c_time_t)
ffi('entry_birthtime', [c_archive_entry_p], c_time_t)
ffi('entry_mtime', [c_archive_entry_p], c_time_t)
ffi('entry_ctime', [c_archive_entry_p], c_time_t)
ffi('entry_atime_nsec', [c_archive_entry_p], c_long)
ffi('entry_birthtime_nsec', [c_archive_entry_p], c_long)
ffi('entry_mtime_nsec', [c_archive_entry_p], c_long)
ffi('entry_ctime_nsec', [c_archive_entry_p], c_long)
ffi('entry_atime_is_set', [c_archive_entry_p], c_int)
ffi('entry_birthtime_is_set', [c_archive_entry_p], c_int)
ffi('entry_mtime_is_set', [c_archive_entry_p], c_int)
ffi('entry_ctime_is_set', [c_archive_entry_p], c_int)
ffi('entry_pathname', [c_archive_entry_p], c_char_p)
ffi('entry_pathname_w', [c_archive_entry_p], c_wchar_p)
ffi('entry_sourcepath', [c_archive_entry_p], c_char_p)
ffi('entry_size', [c_archive_entry_p], c_longlong)
ffi('entry_size_is_set', [c_archive_entry_p], c_int)
ffi('entry_mode', [c_archive_entry_p], c_int)
ffi('entry_strmode', [c_archive_entry_p], c_char_p)
ffi('entry_perm', [c_archive_entry_p], c_int)
ffi('entry_hardlink', [c_archive_entry_p], c_char_p)
ffi('entry_hardlink_w', [c_archive_entry_p], c_wchar_p)
ffi('entry_symlink', [c_archive_entry_p], c_char_p)
ffi('entry_symlink_w', [c_archive_entry_p], c_wchar_p)
ffi('entry_rdev', [c_archive_entry_p], c_uint)
ffi('entry_rdevmajor', [c_archive_entry_p], c_uint)
ffi('entry_rdevminor', [c_archive_entry_p], c_uint)
ffi('entry_uid', [c_archive_entry_p], c_longlong)
ffi('entry_gid', [c_archive_entry_p], c_longlong)
ffi('entry_uname', [c_archive_entry_p], c_char_p)
ffi('entry_gname', [c_archive_entry_p], c_char_p)
ffi('entry_uname_w', [c_archive_entry_p], c_wchar_p)
ffi('entry_gname_w', [c_archive_entry_p], c_wchar_p)
ffi('entry_set_size', [c_archive_entry_p, c_longlong], None)
ffi('entry_set_filetype', [c_archive_entry_p, c_uint], None)
ffi('entry_set_uid', [c_archive_entry_p, c_longlong], None)
ffi('entry_set_gid', [c_archive_entry_p, c_longlong], None)
ffi('entry_set_mode', [c_archive_entry_p, c_int], None)
ffi('entry_set_perm', [c_archive_entry_p, c_int], None)
ffi('entry_set_atime', [c_archive_entry_p, c_time_t, c_long], None)
ffi('entry_set_mtime', [c_archive_entry_p, c_time_t, c_long], None)
ffi('entry_set_ctime', [c_archive_entry_p, c_time_t, c_long], None)
ffi('entry_set_birthtime', [c_archive_entry_p, c_time_t, c_long], None)
ffi('entry_set_rdev', [c_archive_entry_p, c_uint], None)
ffi('entry_set_rdevmajor', [c_archive_entry_p, c_uint], None)
ffi('entry_set_rdevminor', [c_archive_entry_p, c_uint], None)
ffi('entry_unset_size', [c_archive_entry_p], None)
ffi('entry_unset_atime', [c_archive_entry_p], None)
ffi('entry_unset_mtime', [c_archive_entry_p], None)
ffi('entry_unset_ctime', [c_archive_entry_p], None)
ffi('entry_unset_birthtime', [c_archive_entry_p], None)
ffi('entry_copy_pathname', [c_archive_entry_p, c_char_p], None)
ffi('entry_update_pathname_utf8', [c_archive_entry_p, c_char_p], c_int, check_int)
ffi('entry_copy_link', [c_archive_entry_p, c_char_p], None)
ffi('entry_update_link_utf8', [c_archive_entry_p, c_char_p], c_int, check_int)
ffi('entry_copy_uname', [c_archive_entry_p, c_char_p], None)
ffi('entry_update_uname_utf8', [c_archive_entry_p, c_char_p], c_int, check_int)
ffi('entry_copy_gname', [c_archive_entry_p, c_char_p], None)
ffi('entry_update_gname_utf8', [c_archive_entry_p, c_char_p], c_int, check_int)
ffi('entry_clear', [c_archive_entry_p], c_archive_entry_p)
ffi('entry_free', [c_archive_entry_p], None)
# archive_read
ffi('read_new', [], c_archive_p, check_null)
READ_FORMATS = set((
'7zip', 'all', 'ar', 'cab', 'cpio', 'empty', 'iso9660', 'lha', 'mtree',
'rar', 'raw', 'tar', 'xar', 'zip', 'warc'
))
for f_name in list(READ_FORMATS):
try:
get_read_format_function(f_name)
except ValueError as e: # pragma: no cover
logger.info(str(e))
READ_FORMATS.remove(f_name)
READ_FILTERS = set((
'all', 'bzip2', 'compress', 'grzip', 'gzip', 'lrzip', 'lzip', 'lzma',
'lzop', 'none', 'rpm', 'uu', 'xz', 'lz4', 'zstd'
))
for f_name in list(READ_FILTERS):
try:
get_read_filter_function(f_name)
except ValueError as e: # pragma: no cover
logger.info(str(e))
READ_FILTERS.remove(f_name)
ffi('read_set_seek_callback', [c_archive_p, SEEK_CALLBACK], c_int, check_int)
ffi('read_open',
[c_archive_p, c_void_p, OPEN_CALLBACK, READ_CALLBACK, CLOSE_CALLBACK],
c_int, check_int)
ffi('read_open_fd', [c_archive_p, c_int, c_size_t], c_int, check_int)
ffi('read_open_filename_w', [c_archive_p, c_wchar_p, c_size_t],
c_int, check_int)
ffi('read_open_memory', [c_archive_p, c_void_p, c_size_t], c_int, check_int)
ffi('read_next_header', [c_archive_p, POINTER(c_void_p)], c_int, check_int)
ffi('read_next_header2', [c_archive_p, c_void_p], c_int, check_int)
ffi('read_close', [c_archive_p], c_int, check_int)
ffi('read_free', [c_archive_p], c_int, check_int)
# archive_read_disk
ffi('read_disk_new', [], c_archive_p, check_null)
ffi('read_disk_set_behavior', [c_archive_p, c_int], c_int, check_int)
ffi('read_disk_set_standard_lookup', [c_archive_p], c_int, check_int)
ffi('read_disk_open', [c_archive_p, c_char_p], c_int, check_int)
ffi('read_disk_open_w', [c_archive_p, c_wchar_p], c_int, check_int)
ffi('read_disk_descend', [c_archive_p], c_int, check_int)
# archive_read_data
ffi('read_data_block',
[c_archive_p, POINTER(c_void_p), POINTER(c_size_t), POINTER(c_longlong)],
c_int, check_int)
ffi('read_data', [c_archive_p, c_void_p, c_size_t], c_ssize_t, check_int)
ffi('read_data_skip', [c_archive_p], c_int, check_int)
# archive_write
ffi('write_new', [], c_archive_p, check_null)
ffi('write_set_options', [c_archive_p, c_char_p], c_int, check_int)
ffi('write_disk_new', [], c_archive_p, check_null)
ffi('write_disk_set_options', [c_archive_p, c_int], c_int, check_int)
WRITE_FORMATS = set((
'7zip', 'ar_bsd', 'ar_svr4', 'cpio', 'cpio_newc', 'gnutar', 'iso9660',
'mtree', 'mtree_classic', 'pax', 'pax_restricted', 'shar', 'shar_dump',
'ustar', 'v7tar', 'xar', 'zip', 'warc'
))
for f_name in list(WRITE_FORMATS):
try:
get_write_format_function(f_name)
except ValueError as e: # pragma: no cover
logger.info(str(e))
WRITE_FORMATS.remove(f_name)
WRITE_FILTERS = set((
'b64encode', 'bzip2', 'compress', 'grzip', 'gzip', 'lrzip', 'lzip', 'lzma',
'lzop', 'uuencode', 'xz', 'lz4', 'zstd'
))
for f_name in list(WRITE_FILTERS):
try:
get_write_filter_function(f_name)
except ValueError as e: # pragma: no cover
logger.info(str(e))
WRITE_FILTERS.remove(f_name)
ffi('write_open',
[c_archive_p, c_void_p, OPEN_CALLBACK, WRITE_CALLBACK, CLOSE_CALLBACK],
c_int, check_int)
ffi('write_open_fd', [c_archive_p, c_int], c_int, check_int)
ffi('write_open_filename', [c_archive_p, c_char_p], c_int, check_int)
ffi('write_open_filename_w', [c_archive_p, c_wchar_p], c_int, check_int)
ffi('write_open_memory',
[c_archive_p, c_void_p, c_size_t, POINTER(c_size_t)],
c_int, check_int)
ffi('write_get_bytes_in_last_block', [c_archive_p], c_int, check_int)
ffi('write_get_bytes_per_block', [c_archive_p], c_int, check_int)
ffi('write_set_bytes_in_last_block', [c_archive_p, c_int], c_int, check_int)
ffi('write_set_bytes_per_block', [c_archive_p, c_int], c_int, check_int)
ffi('write_header', [c_archive_p, c_void_p], c_int, check_int)
ffi('write_data', [c_archive_p, c_void_p, c_size_t], c_ssize_t, check_int)
ffi('write_data_block', [c_archive_p, c_void_p, c_size_t, c_longlong],
c_int, check_int)
ffi('write_finish_entry', [c_archive_p], c_int, check_int)
ffi('write_fail', [c_archive_p], c_int, check_int)
ffi('write_close', [c_archive_p], c_int, check_int)
ffi('write_free', [c_archive_p], c_int, check_int)
# archive encryption
try:
ffi('read_add_passphrase', [c_archive_p, c_char_p], c_int, check_int)
ffi('write_set_passphrase', [c_archive_p, c_char_p], c_int, check_int)
except AttributeError:
logger.info(
f"the libarchive being used (version {version_number()}, "
f"path {libarchive_path}) doesn't support encryption"
)
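The `READ_FORMATS`/`READ_FILTERS` probing above (and its `WRITE_*` counterpart further down) silently drops whatever the local libarchive build does not provide. A minimal stand-alone sketch of the pattern, with a stub in place of `get_read_format_function` and an invented `AVAILABLE` set:

```python
import logging

logger = logging.getLogger("probe")
AVAILABLE = {"tar", "zip"}  # invented: what this hypothetical build supports

def get_read_format_function(name):
    # stub for ffi.get_read_format_function: raise ValueError when the
    # underlying libarchive lacks the corresponding symbol
    if name not in AVAILABLE:
        raise ValueError(f"read format '{name}' is not available")
    return lambda archive_p: None

READ_FORMATS = {"tar", "zip", "rar"}
for f_name in list(READ_FORMATS):  # list(): don't mutate the set mid-iteration
    try:
        get_read_format_function(f_name)
    except ValueError as e:
        logger.info(str(e))
        READ_FORMATS.remove(f_name)

print(sorted(READ_FORMATS))  # ['tar', 'zip']
```

The `list(...)` copy is the important detail: removing from a set while iterating it directly raises `RuntimeError`.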

@ -0,0 +1,7 @@
READDISK_RESTORE_ATIME = 0x0001
READDISK_HONOR_NODUMP = 0x0002
READDISK_MAC_COPYFILE = 0x0004
READDISK_NO_TRAVERSE_MOUNTS = 0x0008
READDISK_NO_XATTR = 0x0010
READDISK_NO_ACL = 0x0020
READDISK_NO_FFLAGS = 0x0040
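These constants are bit flags meant to be OR-ed together before being passed as the `flags` argument of `ArchiveWrite.add_files()`, which forwards the result to `archive_read_disk_set_behavior`. A quick sketch, repeating three of the constants:

```python
# Combine READDISK_* bit flags with bitwise OR; the combined value is what
# add_files(flags=...) ultimately hands to archive_read_disk_set_behavior.
READDISK_NO_TRAVERSE_MOUNTS = 0x0008
READDISK_NO_XATTR = 0x0010
READDISK_NO_ACL = 0x0020

flags = READDISK_NO_TRAVERSE_MOUNTS | READDISK_NO_XATTR | READDISK_NO_ACL
print(hex(flags))                       # 0x38
print(bool(flags & READDISK_NO_XATTR))  # True
```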

@ -0,0 +1,176 @@
from contextlib import contextmanager
from ctypes import cast, c_void_p, POINTER, create_string_buffer
from os import fstat, stat
from . import ffi
from .ffi import (
ARCHIVE_EOF, OPEN_CALLBACK, READ_CALLBACK, CLOSE_CALLBACK, SEEK_CALLBACK,
NO_OPEN_CB, NO_CLOSE_CB, page_size,
)
from .entry import ArchiveEntry, PassedArchiveEntry
class ArchiveRead:
def __init__(self, archive_p, header_codec='utf-8'):
self._pointer = archive_p
self.header_codec = header_codec
def __iter__(self):
"""Iterates through an archive's entries.
"""
archive_p = self._pointer
header_codec = self.header_codec
read_next_header2 = ffi.read_next_header2
while 1:
entry = ArchiveEntry(archive_p, header_codec)
r = read_next_header2(archive_p, entry._entry_p)
if r == ARCHIVE_EOF:
return
yield entry
entry.__class__ = PassedArchiveEntry
@property
def bytes_read(self):
return ffi.filter_bytes(self._pointer, -1)
@property
def filter_names(self):
count = ffi.filter_count(self._pointer)
return [ffi.filter_name(self._pointer, i) for i in range(count - 1)]
@property
def format_name(self):
return ffi.format_name(self._pointer)
@contextmanager
def new_archive_read(format_name='all', filter_name='all', passphrase=None):
"""Creates an archive struct suitable for reading from an archive.
Returns a pointer if successful. Raises ArchiveError on error.
"""
archive_p = ffi.read_new()
try:
if passphrase:
if not isinstance(passphrase, bytes):
passphrase = passphrase.encode('utf-8')
try:
ffi.read_add_passphrase(archive_p, passphrase)
except AttributeError:
raise NotImplementedError(
f"the libarchive being used (version {ffi.version_number()}, "
f"path {ffi.libarchive_path}) doesn't support encryption"
)
ffi.get_read_filter_function(filter_name)(archive_p)
ffi.get_read_format_function(format_name)(archive_p)
yield archive_p
finally:
ffi.read_free(archive_p)
@contextmanager
def custom_reader(
read_func, format_name='all', filter_name='all',
open_func=None, seek_func=None, close_func=None,
block_size=page_size, archive_read_class=ArchiveRead, passphrase=None,
header_codec='utf-8',
):
"""Read an archive using a custom function.
"""
open_cb = OPEN_CALLBACK(open_func) if open_func else NO_OPEN_CB
read_cb = READ_CALLBACK(read_func)
close_cb = CLOSE_CALLBACK(close_func) if close_func else NO_CLOSE_CB
seek_cb = SEEK_CALLBACK(seek_func)
with new_archive_read(format_name, filter_name, passphrase) as archive_p:
if seek_func:
ffi.read_set_seek_callback(archive_p, seek_cb)
ffi.read_open(archive_p, None, open_cb, read_cb, close_cb)
yield archive_read_class(archive_p, header_codec)
@contextmanager
def fd_reader(
fd, format_name='all', filter_name='all', block_size=4096, passphrase=None,
header_codec='utf-8',
):
"""Read an archive from a file descriptor.
"""
with new_archive_read(format_name, filter_name, passphrase) as archive_p:
try:
block_size = fstat(fd).st_blksize
except (OSError, AttributeError): # pragma: no cover
pass
ffi.read_open_fd(archive_p, fd, block_size)
yield ArchiveRead(archive_p, header_codec)
@contextmanager
def file_reader(
path, format_name='all', filter_name='all', block_size=4096, passphrase=None,
header_codec='utf-8',
):
"""Read an archive from a file.
"""
with new_archive_read(format_name, filter_name, passphrase) as archive_p:
try:
block_size = stat(path).st_blksize
except (OSError, AttributeError): # pragma: no cover
pass
ffi.read_open_filename_w(archive_p, path, block_size)
yield ArchiveRead(archive_p, header_codec)
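Both `fd_reader` and `file_reader` try to replace the 4096-byte default with the filesystem's preferred block size, falling back silently where `st_blksize` is unavailable. The same probe in isolation (stat-ing the current directory purely for illustration):

```python
import os

block_size = 4096  # the readers' default when st_blksize can't be read
try:
    # st_blksize is a Unix-only stat field, hence the broad fallback
    block_size = os.stat(os.getcwd()).st_blksize
except (OSError, AttributeError):
    pass
print(block_size > 0)  # True
```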
@contextmanager
def memory_reader(
buf, format_name='all', filter_name='all', passphrase=None,
header_codec='utf-8',
):
"""Read an archive from memory.
"""
with new_archive_read(format_name, filter_name, passphrase) as archive_p:
ffi.read_open_memory(archive_p, cast(buf, c_void_p), len(buf))
yield ArchiveRead(archive_p, header_codec)
@contextmanager
def stream_reader(
stream, format_name='all', filter_name='all', block_size=page_size,
passphrase=None, header_codec='utf-8',
):
"""Read an archive from a stream.
The `stream` object must support the standard `readinto` method.
If `stream.seekable()` returns `True`, then an appropriate seek callback is
passed to libarchive.
"""
buf = create_string_buffer(block_size)
buf_p = cast(buf, c_void_p)
def read_func(archive_p, context, ptrptr):
# readinto the buffer, returns number of bytes read
length = stream.readinto(buf)
# write the address of the buffer into the pointer
ptrptr = cast(ptrptr, POINTER(c_void_p))
ptrptr[0] = buf_p
# tell libarchive how much data was written into the buffer
return length
def seek_func(archive_p, context, offset, whence):
stream.seek(offset, whence)
# tell libarchive the current position
return stream.tell()
open_cb = NO_OPEN_CB
read_cb = READ_CALLBACK(read_func)
close_cb = NO_CLOSE_CB
seek_cb = SEEK_CALLBACK(seek_func)
with new_archive_read(format_name, filter_name, passphrase) as archive_p:
if stream.seekable():
ffi.read_set_seek_callback(archive_p, seek_cb)
ffi.read_open(archive_p, None, open_cb, read_cb, close_cb)
yield ArchiveRead(archive_p, header_codec)
seekable_stream_reader = stream_reader
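`stream_reader` only requires the stream to implement `readinto`, and uses `seekable()`/`seek()`/`tell()` when present, so any stdlib binary stream qualifies. For instance:

```python
import io

stream = io.BytesIO(b"hello world")
buf = bytearray(5)
n = stream.readinto(buf)  # fills buf in place, returns the byte count
print(n, bytes(buf))      # 5 b'hello'
stream.seek(0, io.SEEK_END)
print(stream.tell())      # 11
```

This is exactly what `read_func` above relies on: each callback invocation refills the same buffer and reports how many bytes landed in it.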

@ -0,0 +1,279 @@
from contextlib import contextmanager
from ctypes import byref, cast, c_char, c_size_t, c_void_p, POINTER
from posixpath import join
import warnings
from . import ffi
from .entry import ArchiveEntry, FileType
from .ffi import (
OPEN_CALLBACK, WRITE_CALLBACK, CLOSE_CALLBACK, NO_OPEN_CB, NO_CLOSE_CB,
ARCHIVE_EOF,
page_size, entry_sourcepath, entry_clear, read_disk_new, read_disk_open_w,
read_next_header2, read_disk_descend, read_free, write_header, write_data,
write_finish_entry,
read_disk_set_behavior
)
@contextmanager
def new_archive_read_disk(path, flags=0, lookup=False):
archive_p = read_disk_new()
read_disk_set_behavior(archive_p, flags)
if lookup:
ffi.read_disk_set_standard_lookup(archive_p)
read_disk_open_w(archive_p, path)
try:
yield archive_p
finally:
read_free(archive_p)
class ArchiveWrite:
def __init__(self, archive_p, header_codec='utf-8'):
self._pointer = archive_p
self.header_codec = header_codec
def add_entries(self, entries):
"""Add the given entries to the archive.
"""
write_p = self._pointer
for entry in entries:
write_header(write_p, entry._entry_p)
for block in entry.get_blocks():
write_data(write_p, block, len(block))
write_finish_entry(write_p)
def add_files(
self, *paths, flags=0, lookup=False, pathname=None, recursive=True,
**attributes
):
"""Read files through the OS and add them to the archive.
Args:
paths (str): the paths of the files to add to the archive
flags (int):
passed to the C function `archive_read_disk_set_behavior`;
use the `libarchive.flags.READDISK_*` constants
lookup (bool):
when True, the C function `archive_read_disk_set_standard_lookup`
is called to enable the lookup of user and group names
pathname (str | None):
the path of the file in the archive, defaults to the source path
recursive (bool):
when False, if a path in `paths` is a directory,
only the directory itself is added.
attributes (dict): passed to `ArchiveEntry.modify()`
Raises:
ArchiveError: if a file doesn't exist or can't be accessed, or if
adding it to the archive fails
"""
write_p = self._pointer
block_size = ffi.write_get_bytes_per_block(write_p)
if block_size <= 0:
block_size = 10240 # pragma: no cover
entry = ArchiveEntry(header_codec=self.header_codec)
entry_p = entry._entry_p
        # `pathname` is a named parameter, so it never reaches **attributes
        destination_path = pathname
for path in paths:
with new_archive_read_disk(path, flags, lookup) as read_p:
while 1:
r = read_next_header2(read_p, entry_p)
if r == ARCHIVE_EOF:
break
entry_path = entry.pathname
if destination_path:
if entry_path == path:
entry_path = destination_path
else:
assert entry_path.startswith(path)
entry_path = join(
destination_path,
entry_path[len(path):].lstrip('/')
)
entry.pathname = entry_path.lstrip('/')
if attributes:
entry.modify(**attributes)
read_disk_descend(read_p)
write_header(write_p, entry_p)
if entry.isreg:
with open(entry_sourcepath(entry_p), 'rb') as f:
while 1:
data = f.read(block_size)
if not data:
break
write_data(write_p, data, len(data))
write_finish_entry(write_p)
entry_clear(entry_p)
if not recursive:
break
def add_file(self, path, **kw):
"Single-path alias of `add_files()`"
return self.add_files(path, **kw)
def add_file_from_memory(
self, entry_path, entry_size, entry_data,
filetype=FileType.REGULAR_FILE, permission=0o664,
**other_attributes
):
""""Add file from memory to archive.
Args:
entry_path (str | bytes): the file's path
entry_size (int): the file's size, in bytes
entry_data (bytes | Iterable[bytes]): the file's content
filetype (int): see `libarchive.entry.ArchiveEntry.modify()`
permission (int): see `libarchive.entry.ArchiveEntry.modify()`
other_attributes: see `libarchive.entry.ArchiveEntry.modify()`
"""
archive_pointer = self._pointer
if isinstance(entry_data, bytes):
entry_data = (entry_data,)
elif isinstance(entry_data, str):
raise TypeError(
"entry_data: expected bytes, got %r" % type(entry_data)
)
entry = ArchiveEntry(
pathname=entry_path, size=entry_size, filetype=filetype,
perm=permission, header_codec=self.header_codec,
**other_attributes
)
write_header(archive_pointer, entry._entry_p)
for chunk in entry_data:
if not chunk:
break
write_data(archive_pointer, chunk, len(chunk))
write_finish_entry(archive_pointer)
    @property
    def bytes_written(self):
        return ffi.filter_bytes(self._pointer, -1)
@contextmanager
def new_archive_write(format_name, filter_name=None, options='', passphrase=None):
    archive_p = ffi.write_new()
    try:
        ffi.get_write_format_function(format_name)(archive_p)
        if filter_name:
            ffi.get_write_filter_function(filter_name)(archive_p)
        if passphrase and 'encryption' not in options:
            if format_name == 'zip':
                warnings.warn(
                    "The default encryption scheme of zip archives is weak. "
                    "Use `options='encryption=$type'` to specify the encryption "
                    "type you want to use. The supported values are 'zipcrypt' "
                    "(the weak default), 'aes128' and 'aes256'."
                )
            options += ',encryption' if options else 'encryption'
        if options:
            if not isinstance(options, bytes):
                options = options.encode('utf-8')
            ffi.write_set_options(archive_p, options)
        if passphrase:
            if not isinstance(passphrase, bytes):
                passphrase = passphrase.encode('utf-8')
            try:
                ffi.write_set_passphrase(archive_p, passphrase)
            except AttributeError:
                raise NotImplementedError(
                    f"the libarchive being used (version {ffi.version_number()}, "
                    f"path {ffi.libarchive_path}) doesn't support encryption"
                )
        yield archive_p
        ffi.write_close(archive_p)
        ffi.write_free(archive_p)
    except Exception:
        ffi.write_fail(archive_p)
        ffi.write_free(archive_p)
        raise
@contextmanager
def custom_writer(
write_func, format_name, filter_name=None,
open_func=None, close_func=None, block_size=page_size,
archive_write_class=ArchiveWrite, options='', passphrase=None,
header_codec='utf-8',
):
"""Create an archive and send it in chunks to the `write_func` function.
For formats and filters, see `WRITE_FORMATS` and `WRITE_FILTERS` in the
`libarchive.ffi` module.
"""
def write_cb_internal(archive_p, context, buffer_, length):
data = cast(buffer_, POINTER(c_char * length))[0]
return write_func(data)
open_cb = OPEN_CALLBACK(open_func) if open_func else NO_OPEN_CB
write_cb = WRITE_CALLBACK(write_cb_internal)
close_cb = CLOSE_CALLBACK(close_func) if close_func else NO_CLOSE_CB
with new_archive_write(format_name, filter_name, options,
passphrase) as archive_p:
ffi.write_set_bytes_in_last_block(archive_p, 1)
ffi.write_set_bytes_per_block(archive_p, block_size)
ffi.write_open(archive_p, None, open_cb, write_cb, close_cb)
yield archive_write_class(archive_p, header_codec)
@contextmanager
def fd_writer(
fd, format_name, filter_name=None,
archive_write_class=ArchiveWrite, options='', passphrase=None,
header_codec='utf-8',
):
"""Create an archive and write it into a file descriptor.
For formats and filters, see `WRITE_FORMATS` and `WRITE_FILTERS` in the
`libarchive.ffi` module.
"""
with new_archive_write(format_name, filter_name, options,
passphrase) as archive_p:
ffi.write_open_fd(archive_p, fd)
yield archive_write_class(archive_p, header_codec)
@contextmanager
def file_writer(
filepath, format_name, filter_name=None,
archive_write_class=ArchiveWrite, options='', passphrase=None,
header_codec='utf-8',
):
"""Create an archive and write it into a file.
For formats and filters, see `WRITE_FORMATS` and `WRITE_FILTERS` in the
`libarchive.ffi` module.
"""
with new_archive_write(format_name, filter_name, options,
passphrase) as archive_p:
ffi.write_open_filename_w(archive_p, filepath)
yield archive_write_class(archive_p, header_codec)
@contextmanager
def memory_writer(
buf, format_name, filter_name=None,
archive_write_class=ArchiveWrite, options='', passphrase=None,
header_codec='utf-8',
):
"""Create an archive and write it into a buffer.
For formats and filters, see `WRITE_FORMATS` and `WRITE_FILTERS` in the
`libarchive.ffi` module.
"""
with new_archive_write(format_name, filter_name, options,
passphrase) as archive_p:
used = byref(c_size_t())
buf_p = cast(buf, c_void_p)
ffi.write_open_memory(archive_p, buf_p, len(buf), used)
yield archive_write_class(archive_p, header_codec)
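`custom_writer`'s internal callback (`write_cb_internal` above) has to turn the raw pointer libarchive passes into Python bytes, which it does with a ctypes cast. The mechanism in isolation, using a local buffer in place of libarchive's:

```python
from ctypes import POINTER, c_char, c_void_p, cast, create_string_buffer

src = create_string_buffer(b"abcdef", 6)
buffer_ = cast(src, c_void_p)  # stand-in for the void* libarchive passes
length = 6
# reinterpret the pointer as an array of `length` chars, then copy it out
data = cast(buffer_, POINTER(c_char * length))[0].raw
print(data)  # b'abcdef'
```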

@ -0,0 +1,12 @@
[wheel]
universal = 1
[flake8]
exclude = .?*,env*/
ignore = E226,E731,W504
max-line-length = 85
[egg_info]
tag_build =
tag_date = 0

@ -0,0 +1,25 @@
import os
from os.path import join, dirname
from setuptools import setup, find_packages
from version import get_version
os.umask(0o022)
with open(join(dirname(__file__), 'README.rst'), encoding="utf-8") as f:
README = f.read()
setup(
name='libarchive-c',
version=get_version(),
description='Python interface to libarchive',
author='Changaco',
author_email='changaco@changaco.oy.lc',
url='https://github.com/Changaco/python-libarchive-c',
license='CC0',
packages=find_packages(exclude=['tests']),
long_description=README,
long_description_content_type='text/x-rst',
keywords='archive libarchive 7z tar bz2 zip gz',
)

@ -0,0 +1,136 @@
from contextlib import closing, contextmanager
from copy import copy
from os import chdir, getcwd, stat, walk
from os.path import abspath, dirname, join
from stat import S_ISREG
import tarfile
try:
from stat import filemode
except ImportError: # Python 2
filemode = tarfile.filemode
from libarchive import file_reader
data_dir = join(dirname(__file__), 'data')
def check_archive(archive, tree):
tree2 = copy(tree)
for e in archive:
epath = str(e).rstrip('/')
assert epath in tree2
estat = tree2.pop(epath)
assert e.mtime == int(estat['mtime'])
if not e.isdir:
size = e.size
if size is not None:
assert size == estat['size']
with open(epath, 'rb') as f:
for block in e.get_blocks():
assert f.read(len(block)) == block
leftover = f.read()
assert not leftover
# Check that there are no missing directories or files
assert len(tree2) == 0
def get_entries(location):
"""
Using the archive file at `location`, return an iterable of name->value
    mappings for each libarchive.ArchiveEntry object's essential attributes.
    Paths are decoded with the 'surrogateescape' error handler because they
    may contain arbitrary bytes that are not valid UTF-8.
"""
with file_reader(location) as arch:
for entry in arch:
            # libarchive introduces prefixes (such as 'h' for hardlinks)
            # that tarfile does not, so we ignore the first char
mode = entry.strmode[1:].decode('ascii')
yield {
'path': surrogate_decode(entry.pathname),
'mtime': entry.mtime,
'size': entry.size,
'mode': mode,
'isreg': entry.isreg,
'isdir': entry.isdir,
'islnk': entry.islnk,
'issym': entry.issym,
'linkpath': surrogate_decode(entry.linkpath),
'isblk': entry.isblk,
'ischr': entry.ischr,
'isfifo': entry.isfifo,
'isdev': entry.isdev,
'uid': entry.uid,
'gid': entry.gid
}
def get_tarinfos(location):
"""
Using the tar archive file at `location`, return an iterable of
    name->value mappings for each tarfile.TarInfo object's essential
    attributes.
    Paths are decoded with the 'surrogateescape' error handler because they
    may contain arbitrary bytes that are not valid UTF-8.
"""
with closing(tarfile.open(location)) as tar:
for entry in tar:
path = surrogate_decode(entry.path or '')
if entry.isdir() and not path.endswith('/'):
path += '/'
            # libarchive introduces prefixes (such as 'h' for hardlinks)
            # that tarfile does not, so we ignore the first char
mode = filemode(entry.mode)[1:]
yield {
'path': path,
'mtime': entry.mtime,
'size': entry.size,
'mode': mode,
'isreg': entry.isreg(),
'isdir': entry.isdir(),
'islnk': entry.islnk(),
'issym': entry.issym(),
'linkpath': surrogate_decode(entry.linkpath or None),
'isblk': entry.isblk(),
'ischr': entry.ischr(),
'isfifo': entry.isfifo(),
'isdev': entry.isdev(),
'uid': entry.uid,
'gid': entry.gid
}
@contextmanager
def in_dir(dirpath):
prev = abspath(getcwd())
chdir(dirpath)
try:
yield
finally:
chdir(prev)
def stat_dict(path):
keys = set(('uid', 'gid', 'mtime'))
mode, _, _, _, uid, gid, size, _, mtime, _ = stat(path)
if S_ISREG(mode):
keys.add('size')
return {k: v for k, v in locals().items() if k in keys}
def treestat(d, stat_dict=stat_dict):
r = {}
for dirpath, dirnames, filenames in walk(d):
r[dirpath] = stat_dict(dirpath)
for fname in filenames:
fpath = join(dirpath, fname)
r[fpath] = stat_dict(fpath)
return r
def surrogate_decode(o):
if isinstance(o, bytes):
return o.decode('utf8', errors='surrogateescape')
return o
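`surrogate_decode` is what produces fixture paths like `ustar/umlauts-\udcc4…` in the JSON below: bytes that are not valid UTF-8 survive decoding as lone surrogates and round-trip losslessly. For example:

```python
raw = b"umlauts-\xc4\xd6"  # Latin-1 umlauts, not valid UTF-8
text = raw.decode("utf8", errors="surrogateescape")
print(ascii(text))  # 'umlauts-\udcc4\udcd6'
# lossless: encoding with the same handler restores the original bytes
print(text.encode("utf8", errors="surrogateescape") == raw)  # True
```

(`ascii()` is used for printing because lone surrogates cannot be written to a UTF-8 terminal directly.)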

@ -0,0 +1,3 @@
This test file is borrowed from the Python codebase and test suite.
It is a tricky tar archive with several weird and malformed entries:
https://hg.python.org/cpython/file/bff88c866886/Lib/test/testtar.tar

@ -0,0 +1,665 @@
[
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "ustar/conttype",
"size": 7011,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "ustar/regtype",
"size": 7011,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": true,
"isfifo": false,
"islnk": false,
"isreg": false,
"issym": false,
"linkpath": null,
"mode": "rwxr-xr-x",
"mtime": 1041808783,
"path": "ustar/dirtype/",
"size": 0,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": true,
"isfifo": false,
"islnk": false,
"isreg": false,
"issym": false,
"linkpath": null,
"mode": "rwxr-xr-x",
"mtime": 1041808783,
"path": "ustar/dirtype-with-size/",
"size": 0,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": true,
"isreg": false,
"issym": false,
"linkpath": "ustar/regtype",
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "ustar/lnktype",
"size": 0,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": false,
"issym": true,
"linkpath": "regtype",
"mode": "rwxrwxrwx",
"mtime": 1041808783,
"path": "ustar/symtype",
"size": 0,
"uid": 1000
},
{
"gid": 100,
"isblk": true,
"ischr": false,
"isdev": true,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": false,
"issym": false,
"linkpath": null,
"mode": "rw-rw----",
"mtime": 1041808783,
"path": "ustar/blktype",
"size": 0,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": true,
"isdev": true,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": false,
"issym": false,
"linkpath": null,
"mode": "rw-rw-rw-",
"mtime": 1041808783,
"path": "ustar/chrtype",
"size": 0,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": true,
"isdir": false,
"isfifo": true,
"islnk": false,
"isreg": false,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "ustar/fifotype",
"size": 0,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "ustar/sparse",
"size": 86016,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "ustar/umlauts-\udcc4\udcd6\udcdc\udce4\udcf6\udcfc\udcdf",
"size": 7011,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "ustar/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/12345/1234567/longname",
"size": 7011,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": false,
"issym": true,
"linkpath": "../linktest1/regtype",
"mode": "rwxrwxrwx",
"mtime": 1041808783,
"path": "./ustar/linktest2/symtype",
"size": 0,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "ustar/linktest1/regtype",
"size": 7011,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": true,
"isreg": false,
"issym": false,
"linkpath": "./ustar/linktest1/regtype",
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "./ustar/linktest2/lnktype",
"size": 0,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": false,
"issym": true,
"linkpath": "ustar/regtype",
"mode": "rwxrwxrwx",
"mtime": 1041808783,
"path": "symtype2",
"size": 0,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "gnu/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/longname",
"size": 7011,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": true,
"isreg": false,
"issym": false,
"linkpath": "gnu/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/longname",
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "gnu/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/longlink",
"size": 0,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "gnu/sparse",
"size": 86016,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "gnu/sparse-0.0",
"size": 86016,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "gnu/sparse-0.1",
"size": 86016,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "gnu/sparse-1.0",
"size": 86016,
"uid": 1000
},
{
"gid": 4294967295,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "gnu/regtype-gnu-uid",
"size": 7011,
"uid": 4294967295
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "misc/regtype-old-v7",
"size": 7011,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "misc/regtype-hpux-signed-chksum-\udcc4\udcd6\udcdc\udce4\udcf6\udcfc\udcdf",
"size": 7011,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "misc/regtype-old-v7-signed-chksum-\udcc4\udcd6\udcdc\udce4\udcf6\udcfc\udcdf",
"size": 7011,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": true,
"isfifo": false,
"islnk": false,
"isreg": false,
"issym": false,
"linkpath": null,
"mode": "rwxr-xr-x",
"mtime": 1041808783,
"path": "misc/dirtype-old-v7/",
"size": 0,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "misc/regtype-suntar",
"size": 7011,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "misc/regtype-xstar",
"size": 7011,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "pax/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/longname",
"size": 7011,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": true,
"isreg": false,
"issym": false,
"linkpath": "pax/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/longname",
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "pax/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/123/longlink",
"size": 0,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "pax/umlauts-\u00c4\u00d6\u00dc\u00e4\u00f6\u00fc\u00df",
"size": 7011,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "pax/regtype1",
"size": 7011,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "pax/regtype2",
"size": 7011,
"uid": 1000
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "pax/regtype3",
"size": 7011,
"uid": 1000
},
{
"gid": 123,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "pax/regtype4",
"size": 7011,
"uid": 123
},
{
"gid": 1000,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "pax/bad-pax-\udce4\udcf6\udcfc",
"size": 7011,
"uid": 1000
},
{
"gid": 0,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "pax/hdrcharset-\udce4\udcf6\udcfc",
"size": 7011,
"uid": 0
},
{
"gid": 100,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1041808783,
"path": "misc/eof",
"size": 0,
"uid": 1000
}
]

View File

@@ -0,0 +1,53 @@
[
{
"gid": 513,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": true,
"isfifo": false,
"islnk": false,
"isreg": false,
"issym": false,
"linkpath": null,
"mode": "rwx------",
"mtime": 1319027321,
"path": "2859/",
"size": 0,
"uid": 500
},
{
"gid": 513,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rwx------",
"mtime": 1319027194,
"path": "2859/Copy of h\u00e0nz\u00ec-somefile.txt",
"size": 0,
"uid": 500
},
{
"gid": 513,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rwx------",
"mtime": 1319027194,
"path": "2859/h\u00e0nz\u00ec?-somefile.txt ",
"size": 0,
"uid": 500
}
]

View File

@@ -0,0 +1,36 @@
[
{
"gid": 1000,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": true,
"isfifo": false,
"islnk": false,
"isreg": false,
"issym": false,
"linkpath": null,
"mode": "rwxr-xr-x",
"mtime": 1268678396,
"path": "a/",
"size": 0,
"uid": 1000
},
{
"gid": 1000,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-r--r--",
"mtime": 1268678259,
"path": "a/gr\u00fcn.png",
"size": 362,
"uid": 1000
}
]

View File

@@ -0,0 +1,36 @@
[
{
"gid": 0,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": true,
"isfifo": false,
"islnk": false,
"isreg": false,
"issym": false,
"linkpath": null,
"mode": "rwxrwxr-x",
"mtime": 1381752672,
"path": "a/",
"size": 0,
"uid": 0
},
{
"gid": 0,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-rw-r--",
"mtime": 1268681860,
"path": "a/gru\u0308n.png",
"size": 362,
"uid": 0
}
]

View File

@@ -0,0 +1,3 @@
Test file borrowed from
https://github.com/libarchive/libarchive/issues/459
http://libarchive.github.io/google-code/issue-350/comment-0/%ED%94%84%EB%A1%9C%EA%B7%B8%EB%9E%A8.zip

View File

@@ -0,0 +1,36 @@
[
{
"gid": 502,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-rw-r--",
"mtime": 1390485689,
"path": "hello.txt",
"size": 14,
"uid": 502
},
{
"gid": 502,
"isblk": false,
"ischr": false,
"isdev": false,
"isdir": false,
"isfifo": false,
"islnk": false,
"isreg": true,
"issym": false,
"linkpath": null,
"mode": "rw-rw-r--",
"mtime": 1390485651,
"path": "\ud504\ub85c\uadf8\ub7a8.txt",
"size": 13,
"uid": 502
}
]

View File

@@ -0,0 +1,127 @@
from copy import copy
from os import stat
from libarchive import (file_reader, file_writer, memory_reader, memory_writer)
import pytest
from . import treestat
# NOTE: zip does not support high resolution time data, but pax and others do
def check_atime_ctime(archive, tree, timefmt=int):
tree2 = copy(tree)
for entry in archive:
epath = str(entry).rstrip('/')
assert epath in tree2
estat = tree2.pop(epath)
assert entry.atime == timefmt(estat.st_atime)
assert entry.ctime == timefmt(estat.st_ctime)
def stat_dict(path):
# return the raw stat output, the tuple output only returns ints
return stat(path)
def time_check(time_tuple, timefmt):
seconds, nanos = time_tuple
maths = float(seconds) + float(nanos) / 1000000000.0
return timefmt(maths)
@pytest.mark.parametrize('archfmt,timefmt', [('zip', int), ('pax', float)])
def test_memory_atime_ctime(archfmt, timefmt):
# Collect information on what should be in the archive
tree = treestat('libarchive', stat_dict)
# Create an archive of our libarchive/ directory
buf = bytes(bytearray(1000000))
with memory_writer(buf, archfmt) as archive1:
archive1.add_files('libarchive/')
# Check the data
with memory_reader(buf) as archive2:
check_atime_ctime(archive2, tree, timefmt=timefmt)
@pytest.mark.parametrize('archfmt,timefmt', [('zip', int), ('pax', float)])
def test_file_atime_ctime(archfmt, timefmt, tmpdir):
archive_path = "{0}/test.{1}".format(tmpdir.strpath, archfmt)
# Collect information on what should be in the archive
tree = treestat('libarchive', stat_dict)
# Create an archive of our libarchive/ directory
with file_writer(archive_path, archfmt) as archive:
archive.add_files('libarchive/')
# Read the archive and check that the data is correct
with file_reader(archive_path) as archive:
check_atime_ctime(archive, tree, timefmt=timefmt)
@pytest.mark.parametrize('archfmt,timefmt', [('zip', int), ('pax', float)])
def test_memory_time_setters(archfmt, timefmt):
has_birthtime = archfmt != 'zip'
# Create an archive of our libarchive/ directory
buf = bytes(bytearray(1000000))
with memory_writer(buf, archfmt) as archive1:
archive1.add_files('libarchive/')
atimestamp = (1482144741, 495628118)
mtimestamp = (1482155417, 659017086)
ctimestamp = (1482145211, 536858081)
btimestamp = (1482144740, 495628118)
buf2 = bytes(bytearray(1000000))
with memory_reader(buf) as archive1:
with memory_writer(buf2, archfmt) as archive2:
for entry in archive1:
entry.set_atime(*atimestamp)
entry.set_mtime(*mtimestamp)
entry.set_ctime(*ctimestamp)
if has_birthtime:
entry.set_birthtime(*btimestamp)
archive2.add_entries([entry])
with memory_reader(buf2) as archive2:
for entry in archive2:
assert entry.atime == time_check(atimestamp, timefmt)
assert entry.mtime == time_check(mtimestamp, timefmt)
assert entry.ctime == time_check(ctimestamp, timefmt)
if has_birthtime:
assert entry.birthtime == time_check(btimestamp, timefmt)
@pytest.mark.parametrize('archfmt,timefmt', [('zip', int), ('pax', float)])
def test_file_time_setters(archfmt, timefmt, tmpdir):
has_birthtime = archfmt != 'zip'
# Create an archive of our libarchive/ directory
archive_path = tmpdir.join('/test.{0}'.format(archfmt)).strpath
archive2_path = tmpdir.join('/test2.{0}'.format(archfmt)).strpath
with file_writer(archive_path, archfmt) as archive1:
archive1.add_files('libarchive/')
atimestamp = (1482144741, 495628118)
mtimestamp = (1482155417, 659017086)
ctimestamp = (1482145211, 536858081)
btimestamp = (1482144740, 495628118)
with file_reader(archive_path) as archive1:
with file_writer(archive2_path, archfmt) as archive2:
for entry in archive1:
entry.set_atime(*atimestamp)
entry.set_mtime(*mtimestamp)
entry.set_ctime(*ctimestamp)
if has_birthtime:
entry.set_birthtime(*btimestamp)
archive2.add_entries([entry])
with file_reader(archive2_path) as archive2:
for entry in archive2:
assert entry.atime == time_check(atimestamp, timefmt)
assert entry.mtime == time_check(mtimestamp, timefmt)
assert entry.ctime == time_check(ctimestamp, timefmt)
if has_birthtime:
assert entry.birthtime == time_check(btimestamp, timefmt)

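The `time_check` helper in the test file above converts libarchive's (seconds, nanoseconds) pairs before comparing: zip stores timestamps with one-second resolution, while pax keeps sub-second precision, which is why the tests are parametrized with `int` for zip and `float` for pax. A minimal standalone sketch of that conversion (re-implemented here outside the test suite):

```python
def time_check(time_tuple, timefmt):
    # Combine a (seconds, nanoseconds) pair into a single number, then coerce
    # it to the resolution the archive format supports (int for zip, float for pax).
    seconds, nanos = time_tuple
    return timefmt(seconds + nanos / 1_000_000_000)

# zip truncates to whole seconds; pax keeps the fractional part
assert time_check((1482144741, 495628118), int) == 1482144741
assert time_check((1482144741, 500000000), float) == 1482144741.5
```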
View File

@@ -0,0 +1,24 @@
from libarchive import memory_reader, memory_writer
from . import check_archive, treestat
def test_convert():
# Collect information on what should be in the archive
tree = treestat('libarchive')
# Create an archive of our libarchive/ directory
buf = bytes(bytearray(1000000))
with memory_writer(buf, 'gnutar', 'xz') as archive1:
archive1.add_files('libarchive/')
# Convert the archive to another format
buf2 = bytes(bytearray(1000000))
with memory_reader(buf) as archive1:
with memory_writer(buf2, 'zip') as archive2:
archive2.add_entries(archive1)
# Check the data
with memory_reader(buf2) as archive2:
check_archive(archive2, tree)

View File

@@ -0,0 +1,151 @@
# -*- coding: utf-8 -*-
from codecs import open
import json
import locale
from os import environ, stat
from os.path import join
import unicodedata
import pytest
from libarchive import memory_reader, memory_writer
from libarchive.entry import ArchiveEntry, ConsumedArchiveEntry, PassedArchiveEntry
from . import data_dir, get_entries, get_tarinfos
text_type = unicode if str is bytes else str # noqa: F821
locale.setlocale(locale.LC_ALL, '')
# needed for sane time stamp comparison
environ['TZ'] = 'UTC'
def test_entry_properties():
buf = bytes(bytearray(1000000))
with memory_writer(buf, 'gnutar') as archive:
archive.add_files('README.rst')
readme_stat = stat('README.rst')
with memory_reader(buf) as archive:
for entry in archive:
assert entry.uid == readme_stat.st_uid
assert entry.gid == readme_stat.st_gid
assert entry.mode == readme_stat.st_mode
assert not entry.isblk
assert not entry.ischr
assert not entry.isdir
assert not entry.isfifo
assert not entry.islnk
assert not entry.issym
assert not entry.linkpath
assert entry.linkpath == entry.linkname
assert entry.isreg
assert entry.isfile
assert not entry.issock
assert not entry.isdev
assert b'rw' in entry.strmode
assert entry.pathname == entry.path
assert entry.pathname == entry.name
def test_check_ArchiveEntry_against_TarInfo():
for name in ('special.tar', 'tar_relative.tar'):
path = join(data_dir, name)
tarinfos = list(get_tarinfos(path))
entries = list(get_entries(path))
for tarinfo, entry in zip(tarinfos, entries):
assert tarinfo == entry
assert len(tarinfos) == len(entries)
def test_check_archiveentry_using_python_testtar():
check_entries(join(data_dir, 'testtar.tar'))
def test_check_archiveentry_with_unicode_and_binary_entries_tar():
check_entries(join(data_dir, 'unicode.tar'))
def test_check_archiveentry_with_unicode_and_binary_entries_zip():
check_entries(join(data_dir, 'unicode.zip'))
def test_check_archiveentry_with_unicode_and_binary_entries_zip2():
check_entries(join(data_dir, 'unicode2.zip'), ignore='mode')
def test_check_archiveentry_with_unicode_entries_and_name_zip():
check_entries(join(data_dir, '\ud504\ub85c\uadf8\ub7a8.zip'))
def check_entries(test_file, regen=False, ignore=''):
ignore = ignore.split()
fixture_file = test_file + '.json'
if regen:
entries = list(get_entries(test_file))
with open(fixture_file, 'w', encoding='UTF-8') as ex:
json.dump(entries, ex, indent=2, sort_keys=True)
with open(fixture_file, encoding='UTF-8') as ex:
expected = json.load(ex)
actual = list(get_entries(test_file))
for e1, e2 in zip(actual, expected):
for key in ignore:
e1.pop(key)
e2.pop(key)
# Normalize all unicode (can vary depending on the system)
for d in (e1, e2):
for key in d:
if isinstance(d[key], text_type):
d[key] = unicodedata.normalize('NFC', d[key])
assert e1 == e2
def test_the_life_cycle_of_archive_entries():
"""Check that `get_blocks` only works on the current entry, and only once.
"""
# Create a test archive in memory
buf = bytes(bytearray(10_000_000))
with memory_writer(buf, 'gnutar') as archive:
archive.add_files(
'README.rst',
'libarchive/__init__.py',
'libarchive/entry.py',
)
# Read multiple entries of the test archive and check how they evolve
with memory_reader(buf) as archive:
archive_iter = iter(archive)
entry1 = next(archive_iter)
assert type(entry1) is ArchiveEntry
for block in entry1.get_blocks():
pass
assert type(entry1) is ConsumedArchiveEntry
with pytest.raises(TypeError):
entry1.get_blocks()
entry2 = next(archive_iter)
assert type(entry2) is ArchiveEntry
assert type(entry1) is PassedArchiveEntry
with pytest.raises(TypeError):
entry1.get_blocks()
entry3 = next(archive_iter)
assert type(entry3) is ArchiveEntry
assert type(entry2) is PassedArchiveEntry
assert type(entry1) is PassedArchiveEntry
def test_non_ASCII_encoding_of_file_metadata():
buf = bytes(bytearray(100_000))
file_name = 'README.rst'
encoded_file_name = 'README.rst'.encode('cp037')
with memory_writer(buf, 'ustar', header_codec='cp037') as archive:
archive.add_file(file_name)
with memory_reader(buf) as archive:
entry = next(iter(archive))
assert entry.pathname == encoded_file_name
with memory_reader(buf, header_codec='cp037') as archive:
entry = next(iter(archive))
assert entry.pathname == file_name

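The `check_entries` helper above runs every string through `unicodedata.normalize('NFC', ...)` before comparing, because the same filename can be stored precomposed or decomposed depending on the system that created the archive (compare the `gr\u00fcn.png` and `gru\u0308n.png` fixtures earlier in this diff). A quick illustration of why that step is needed:

```python
import unicodedata

precomposed = "gr\u00fcn.png"   # 'ü' as a single code point (NFC)
decomposed = "gru\u0308n.png"   # 'u' followed by a combining diaeresis (NFD)

# The two spellings are different code-point sequences...
assert precomposed != decomposed
# ...but normalize to the same NFC form, so the fixture comparison matches.
assert unicodedata.normalize("NFC", decomposed) == precomposed
```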
View File

@@ -0,0 +1,40 @@
from errno import ENOENT
import pytest
from libarchive import ArchiveError, ffi, memory_writer
def test_add_files_nonexistent():
with memory_writer(bytes(bytearray(4096)), 'zip') as archive:
with pytest.raises(ArchiveError) as e:
archive.add_files('nonexistent')
assert e.value.msg
assert e.value.errno == ENOENT
assert e.value.retcode == -25
def test_check_int_logs_warnings(monkeypatch):
calls = []
monkeypatch.setattr(ffi.logger, 'warning', lambda *_: calls.append(1))
archive_p = ffi.write_new()
ffi.check_int(ffi.ARCHIVE_WARN, print, [archive_p])
assert calls == [1]
def test_check_null():
with pytest.raises(ArchiveError) as e:
ffi.check_null(None, print, [])
assert str(e)
def test_error_string_decoding(monkeypatch):
monkeypatch.setattr(ffi, 'error_string', lambda *_: None)
r = ffi._error_string(None)
assert r is None
monkeypatch.setattr(ffi, 'error_string', lambda *_: b'a')
r = ffi._error_string(None)
assert isinstance(r, type(''))
monkeypatch.setattr(ffi, 'error_string', lambda *_: '\xe9'.encode('utf8'))
r = ffi._error_string(None)
assert isinstance(r, bytes)

View File

@@ -0,0 +1,183 @@
"""Test reading, writing and extracting archives."""
import io
import json
import libarchive
from libarchive.entry import format_time
from libarchive.extract import EXTRACT_OWNER, EXTRACT_PERM, EXTRACT_TIME
from libarchive.write import memory_writer
from unittest.mock import patch
import pytest
from . import check_archive, in_dir, treestat
def test_buffers(tmpdir):
# Collect information on what should be in the archive
tree = treestat('libarchive')
# Create an archive of our libarchive/ directory
buf = bytes(bytearray(1000000))
with libarchive.memory_writer(buf, 'gnutar', 'xz') as archive:
archive.add_files('libarchive/')
# Read the archive and check that the data is correct
with libarchive.memory_reader(buf) as archive:
check_archive(archive, tree)
assert archive.format_name == b'GNU tar format'
assert archive.filter_names == [b'xz']
# Extract the archive in tmpdir and check that the data is intact
with in_dir(tmpdir.strpath):
flags = EXTRACT_OWNER | EXTRACT_PERM | EXTRACT_TIME
libarchive.extract_memory(buf, flags)
tree2 = treestat('libarchive')
assert tree2 == tree
def test_fd(tmpdir):
archive_file = open(tmpdir.strpath+'/test.tar.bz2', 'w+b')
fd = archive_file.fileno()
# Collect information on what should be in the archive
tree = treestat('libarchive')
# Create an archive of our libarchive/ directory
with libarchive.fd_writer(fd, 'gnutar', 'bzip2') as archive:
archive.add_files('libarchive/')
# Read the archive and check that the data is correct
archive_file.seek(0)
with libarchive.fd_reader(fd) as archive:
check_archive(archive, tree)
assert archive.format_name == b'GNU tar format'
assert archive.filter_names == [b'bzip2']
# Extract the archive in tmpdir and check that the data is intact
archive_file.seek(0)
with in_dir(tmpdir.strpath):
flags = EXTRACT_OWNER | EXTRACT_PERM | EXTRACT_TIME
libarchive.extract_fd(fd, flags)
tree2 = treestat('libarchive')
assert tree2 == tree
def test_files(tmpdir):
archive_path = tmpdir.strpath+'/test.tar.gz'
# Collect information on what should be in the archive
tree = treestat('libarchive')
# Create an archive of our libarchive/ directory
with libarchive.file_writer(archive_path, 'ustar', 'gzip') as archive:
archive.add_files('libarchive/')
# Read the archive and check that the data is correct
with libarchive.file_reader(archive_path) as archive:
check_archive(archive, tree)
assert archive.format_name == b'POSIX ustar format'
assert archive.filter_names == [b'gzip']
# Extract the archive in tmpdir and check that the data is intact
with in_dir(tmpdir.strpath):
flags = EXTRACT_OWNER | EXTRACT_PERM | EXTRACT_TIME
libarchive.extract_file(archive_path, flags)
tree2 = treestat('libarchive')
assert tree2 == tree
def test_custom_writer_and_stream_reader():
# Collect information on what should be in the archive
tree = treestat('libarchive')
# Create an archive of our libarchive/ directory
stream = io.BytesIO()
with libarchive.custom_writer(stream.write, 'zip') as archive:
archive.add_files('libarchive/')
stream.seek(0)
# Read the archive and check that the data is correct
with libarchive.stream_reader(stream, 'zip') as archive:
check_archive(archive, tree)
assert archive.format_name == b'ZIP 2.0 (deflation)'
assert archive.filter_names == []
@patch('libarchive.ffi.write_fail')
def test_write_fail(write_fail_mock):
buf = bytes(bytearray(1000000))
try:
with memory_writer(buf, 'gnutar', 'xz') as archive:
archive.add_files('libarchive/')
raise TypeError
except TypeError:
pass
assert write_fail_mock.called
@patch('libarchive.ffi.write_fail')
def test_write_not_fail(write_fail_mock):
buf = bytes(bytearray(1000000))
with memory_writer(buf, 'gnutar', 'xz') as archive:
archive.add_files('libarchive/')
assert not write_fail_mock.called
def test_adding_nonexistent_file_to_archive():
stream = io.BytesIO()
with libarchive.custom_writer(stream.write, 'zip') as archive:
with pytest.raises(libarchive.ArchiveError):
archive.add_files('nonexistent')
archive.add_files('libarchive/')
@pytest.mark.parametrize(
'archfmt,data_bytes',
[('zip', b'content'),
('gnutar', b''),
('pax', json.dumps({'a': 1, 'b': 2, 'c': 3}).encode()),
('7zip', b'lorem\0ipsum')])
def test_adding_entry_from_memory(archfmt, data_bytes):
entry_path = 'testfile.data'
entry_data = data_bytes
entry_size = len(data_bytes)
blocks = []
archfmt = 'zip'
has_birthtime = archfmt != 'zip'
atime = (1482144741, 495628118)
mtime = (1482155417, 659017086)
ctime = (1482145211, 536858081)
btime = (1482144740, 495628118) if has_birthtime else None
def write_callback(data):
blocks.append(data[:])
return len(data)
with libarchive.custom_writer(write_callback, archfmt) as archive:
archive.add_file_from_memory(
entry_path, entry_size, entry_data,
atime=atime, mtime=mtime, ctime=ctime, birthtime=btime,
uid=1000, gid=1000,
)
buf = b''.join(blocks)
with libarchive.memory_reader(buf) as memory_archive:
for archive_entry in memory_archive:
expected = entry_data
actual = b''.join(archive_entry.get_blocks())
assert expected == actual
assert archive_entry.path == entry_path
assert archive_entry.atime in (atime[0], format_time(*atime))
assert archive_entry.mtime in (mtime[0], format_time(*mtime))
assert archive_entry.ctime in (ctime[0], format_time(*ctime))
if has_birthtime:
assert archive_entry.birthtime in (
btime[0], format_time(*btime)
)
assert archive_entry.uid == 1000
assert archive_entry.gid == 1000

View File

@@ -0,0 +1,36 @@
"""Test security-related extraction flags."""
import pytest
import os
from libarchive import extract_file, file_reader
from libarchive.extract import (
EXTRACT_SECURE_NOABSOLUTEPATHS, EXTRACT_SECURE_NODOTDOT,
)
from libarchive.exception import ArchiveError
from . import data_dir
def run_test(flags):
archive_path = os.path.join(data_dir, 'flags.tar')
try:
extract_file(archive_path, 0)
with pytest.raises(ArchiveError):
extract_file(archive_path, flags)
finally:
with file_reader(archive_path) as archive:
for entry in archive:
if os.path.exists(entry.pathname):
os.remove(entry.pathname)
def test_extraction_is_secure_by_default():
run_test(None)
def test_explicit_no_dot_dot():
run_test(EXTRACT_SECURE_NODOTDOT)
def test_explicit_no_absolute_paths():
run_test(EXTRACT_SECURE_NOABSOLUTEPATHS)

View File

@@ -0,0 +1,14 @@
[tox]
envlist=py38,py39,py310,py311
skipsdist=True
[testenv]
passenv = LIBARCHIVE
commands=
python -m pytest -Wd -vv --forked --cov libarchive --cov-report term-missing {toxinidir}/tests {posargs}
flake8 {toxinidir}
deps=
flake8
pytest
pytest-cov
pytest-forked

View File

@@ -0,0 +1,45 @@
# Source: https://github.com/Changaco/version.py
from os.path import dirname, isdir, join
import re
from subprocess import CalledProcessError, check_output
PREFIX = ''
tag_re = re.compile(r'\btag: %s([0-9][^,]*)\b' % PREFIX)
version_re = re.compile('^Version: (.+)$', re.M)
def get_version():
# Return the version if it has been injected into the file by git-archive
version = tag_re.search('$Format:%D$')
if version:
return version.group(1)
d = dirname(__file__)
if isdir(join(d, '.git')):
# Get the version using "git describe".
cmd = 'git describe --tags --match %s[0-9]* --dirty' % PREFIX
try:
version = check_output(cmd.split()).decode().strip()[len(PREFIX):]
except CalledProcessError:
raise RuntimeError('Unable to get version number from git tags')
# PEP 440 compatibility
if '-' in version:
if version.endswith('-dirty'):
raise RuntimeError('The working tree is dirty')
version = '.post'.join(version.split('-')[:2])
else:
# Extract the version from the PKG-INFO file.
with open(join(d, 'PKG-INFO'), encoding='utf-8', errors='replace') as f:
version = version_re.search(f.read()).group(1)
return version
if __name__ == '__main__':
print(get_version())
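The PEP 440 branch in `get_version` above rewrites `git describe` output of the form `<tag>-<commits-since-tag>-g<hash>` into a `.postN` release. The string transformation in isolation (the version strings below are made-up example values, not taken from this repository):

```python
def pep440_from_describe(version):
    # 'git describe --tags' output looks like '1.2.3-4-gdeadbeef':
    # tag, number of commits since the tag, abbreviated commit hash.
    # Keep the first two parts and join them as a PEP 440 .post release.
    if '-' in version:
        if version.endswith('-dirty'):
            raise RuntimeError('The working tree is dirty')
        version = '.post'.join(version.split('-')[:2])
    return version

assert pep440_from_describe('1.2.3-4-gdeadbeef') == '1.2.3.post4'
assert pep440_from_describe('1.2.3') == '1.2.3'  # exactly on a tag: unchanged
```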

View File

@@ -0,0 +1,22 @@
name: CI
on:
push:
branches: [ main ]
pull_request:
branches: [ main ]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install dependencies
run: |
sudo apt-get -qq update
sudo apt-get -y -qq install python3-pkgconfig
sudo apt-get -y -qq install libblkid-dev libblkid1 python3-dev
- name: Run tests
run: sudo make test

View File

@@ -0,0 +1,44 @@
name: "CodeQL"
on:
push:
branches: [ "main" ]
pull_request:
branches: [ "main" ]
jobs:
analyze:
name: Analyze
runs-on: ubuntu-22.04
permissions:
actions: read
contents: read
security-events: write
strategy:
fail-fast: false
matrix:
language: [ 'cpp', 'python' ]
steps:
- name: Checkout repository
uses: actions/checkout@v4
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v2
with:
languages: ${{ matrix.language }}
- name: Install build dependencies
run: |
sudo apt-get -qq update
sudo apt-get -y -qq install python3-pkgconfig
sudo apt-get -y -qq install libblkid-dev libblkid1 python3-dev
- name: Build
run: |
make
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v2
with:
category: "/language:${{matrix.language}}"

View File

@@ -0,0 +1,6 @@
env/
build/
tests/__pycache__/*
tests/*.img

View File

@@ -0,0 +1,504 @@
GNU LESSER GENERAL PUBLIC LICENSE
Version 2.1, February 1999
Copyright (C) 1991, 1999 Free Software Foundation, Inc.
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
[This is the first released version of the Lesser GPL. It also counts
as the successor of the GNU Library Public License, version 2, hence
the version number 2.1.]
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
Licenses are intended to guarantee your freedom to share and change
free software--to make sure the software is free for all its users.
This license, the Lesser General Public License, applies to some
specially designated software packages--typically libraries--of the
Free Software Foundation and other authors who decide to use it. You
can use it too, but we suggest you first think carefully about whether
this license or the ordinary General Public License is the better
strategy to use in any particular case, based on the explanations below.
When we speak of free software, we are referring to freedom of use,
not price. Our General Public Licenses are designed to make sure that
you have the freedom to distribute copies of free software (and charge
for this service if you wish); that you receive source code or can get
it if you want it; that you can change the software and use pieces of
it in new free programs; and that you are informed that you can do
these things.
To protect your rights, we need to make restrictions that forbid
distributors to deny you these rights or to ask you to surrender these
rights. These restrictions translate to certain responsibilities for
you if you distribute copies of the library or if you modify it.
For example, if you distribute copies of the library, whether gratis
or for a fee, you must give the recipients all the rights that we gave
you. You must make sure that they, too, receive or can get the source
code. If you link other code with the library, you must provide
complete object files to the recipients, so that they can relink them
with the library after making changes to the library and recompiling
it. And you must show them these terms so they know their rights.
We protect your rights with a two-step method: (1) we copyright the
library, and (2) we offer you this license, which gives you legal
permission to copy, distribute and/or modify the library.
To protect each distributor, we want to make it very clear that
there is no warranty for the free library. Also, if the library is
modified by someone else and passed on, the recipients should know
that what they have is not the original version, so that the original
author's reputation will not be affected by problems that might be
introduced by others.
Finally, software patents pose a constant threat to the existence of
any free program. We wish to make sure that a company cannot
effectively restrict the users of a free program by obtaining a
restrictive license from a patent holder. Therefore, we insist that
any patent license obtained for a version of the library must be
consistent with the full freedom of use specified in this license.
Most GNU software, including some libraries, is covered by the
ordinary GNU General Public License. This license, the GNU Lesser
General Public License, applies to certain designated libraries, and
is quite different from the ordinary General Public License. We use
this license for certain libraries in order to permit linking those
libraries into non-free programs.
When a program is linked with a library, whether statically or using
a shared library, the combination of the two is legally speaking a
combined work, a derivative of the original library. The ordinary
General Public License therefore permits such linking only if the
entire combination fits its criteria of freedom. The Lesser General
Public License permits more lax criteria for linking other code with
the library.
We call this license the "Lesser" General Public License because it
does Less to protect the user's freedom than the ordinary General
Public License. It also provides other free software developers Less
of an advantage over competing non-free programs. These disadvantages
are the reason we use the ordinary General Public License for many
libraries. However, the Lesser license provides advantages in certain
special circumstances.
For example, on rare occasions, there may be a special need to
encourage the widest possible use of a certain library, so that it becomes
a de-facto standard. To achieve this, non-free programs must be
allowed to use the library. A more frequent case is that a free
library does the same job as widely used non-free libraries. In this
case, there is little to gain by limiting the free library to free
software only, so we use the Lesser General Public License.
In other cases, permission to use a particular library in non-free
programs enables a greater number of people to use a large body of
free software. For example, permission to use the GNU C Library in
non-free programs enables many more people to use the whole GNU
operating system, as well as its variant, the GNU/Linux operating
system.
Although the Lesser General Public License is Less protective of the
users' freedom, it does ensure that the user of a program that is
linked with the Library has the freedom and the wherewithal to run
that program using a modified version of the Library.
The precise terms and conditions for copying, distribution and
modification follow. Pay close attention to the difference between a
"work based on the library" and a "work that uses the library". The
former contains code derived from the library, whereas the latter must
be combined with the library in order to run.
GNU LESSER GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License Agreement applies to any software library or other
program which contains a notice placed by the copyright holder or
other authorized party saying it may be distributed under the terms of
this Lesser General Public License (also called "this License").
Each licensee is addressed as "you".
A "library" means a collection of software functions and/or data
prepared so as to be conveniently linked with application programs
(which use some of those functions and data) to form executables.
The "Library", below, refers to any such software library or work
which has been distributed under these terms. A "work based on the
Library" means either the Library or any derivative work under
copyright law: that is to say, a work containing the Library or a
portion of it, either verbatim or with modifications and/or translated
straightforwardly into another language. (Hereinafter, translation is
included without limitation in the term "modification".)
"Source code" for a work means the preferred form of the work for
making modifications to it. For a library, complete source code means
all the source code for all modules it contains, plus any associated
interface definition files, plus the scripts used to control compilation
and installation of the library.
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running a program using the Library is not restricted, and output from
such a program is covered only if its contents constitute a work based
on the Library (independent of the use of the Library in a tool for
writing it). Whether that is true depends on what the Library does
and what the program that uses the Library does.
1. You may copy and distribute verbatim copies of the Library's
complete source code as you receive it, in any medium, provided that
you conspicuously and appropriately publish on each copy an
appropriate copyright notice and disclaimer of warranty; keep intact
all the notices that refer to this License and to the absence of any
warranty; and distribute a copy of this License along with the
Library.
You may charge a fee for the physical act of transferring a copy,
and you may at your option offer warranty protection in exchange for a
fee.
2. You may modify your copy or copies of the Library or any portion
of it, thus forming a work based on the Library, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) The modified work must itself be a software library.
b) You must cause the files modified to carry prominent notices
stating that you changed the files and the date of any change.
c) You must cause the whole of the work to be licensed at no
charge to all third parties under the terms of this License.
d) If a facility in the modified Library refers to a function or a
table of data to be supplied by an application program that uses
the facility, other than as an argument passed when the facility
is invoked, then you must make a good faith effort to ensure that,
in the event an application does not supply such function or
table, the facility still operates, and performs whatever part of
its purpose remains meaningful.
(For example, a function in a library to compute square roots has
a purpose that is entirely well-defined independent of the
application. Therefore, Subsection 2d requires that any
application-supplied function or table used by this function must
be optional: if the application does not supply it, the square
root function must still compute square roots.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Library,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Library, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote
it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Library.
In addition, mere aggregation of another work not based on the Library
with the Library (or with a work based on the Library) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may opt to apply the terms of the ordinary GNU General Public
License instead of this License to a given copy of the Library. To do
this, you must alter all the notices that refer to this License, so
that they refer to the ordinary GNU General Public License, version 2,
instead of to this License. (If a newer version than version 2 of the
ordinary GNU General Public License has appeared, then you can specify
that version instead if you wish.) Do not make any other change in
these notices.
Once this change is made in a given copy, it is irreversible for
that copy, so the ordinary GNU General Public License applies to all
subsequent copies and derivative works made from that copy.
This option is useful when you wish to copy part of the code of
the Library into a program that is not a library.
4. You may copy and distribute the Library (or a portion or
derivative of it, under Section 2) in object code or executable form
under the terms of Sections 1 and 2 above provided that you accompany
it with the complete corresponding machine-readable source code, which
must be distributed under the terms of Sections 1 and 2 above on a
medium customarily used for software interchange.
If distribution of object code is made by offering access to copy
from a designated place, then offering equivalent access to copy the
source code from the same place satisfies the requirement to
distribute the source code, even though third parties are not
compelled to copy the source along with the object code.
5. A program that contains no derivative of any portion of the
Library, but is designed to work with the Library by being compiled or
linked with it, is called a "work that uses the Library". Such a
work, in isolation, is not a derivative work of the Library, and
therefore falls outside the scope of this License.
However, linking a "work that uses the Library" with the Library
creates an executable that is a derivative of the Library (because it
contains portions of the Library), rather than a "work that uses the
library". The executable is therefore covered by this License.
Section 6 states terms for distribution of such executables.
When a "work that uses the Library" uses material from a header file
that is part of the Library, the object code for the work may be a
derivative work of the Library even though the source code is not.
Whether this is true is especially significant if the work can be
linked without the Library, or if the work is itself a library. The
threshold for this to be true is not precisely defined by law.
If such an object file uses only numerical parameters, data
structure layouts and accessors, and small macros and small inline
functions (ten lines or less in length), then the use of the object
file is unrestricted, regardless of whether it is legally a derivative
work. (Executables containing this object code plus portions of the
Library will still fall under Section 6.)
Otherwise, if the work is a derivative of the Library, you may
distribute the object code for the work under the terms of Section 6.
Any executables containing that work also fall under Section 6,
whether or not they are linked directly with the Library itself.
6. As an exception to the Sections above, you may also combine or
link a "work that uses the Library" with the Library to produce a
work containing portions of the Library, and distribute that work
under terms of your choice, provided that the terms permit
modification of the work for the customer's own use and reverse
engineering for debugging such modifications.
You must give prominent notice with each copy of the work that the
Library is used in it and that the Library and its use are covered by
this License. You must supply a copy of this License. If the work
during execution displays copyright notices, you must include the
copyright notice for the Library among them, as well as a reference
directing the user to the copy of this License. Also, you must do one
of these things:
a) Accompany the work with the complete corresponding
machine-readable source code for the Library including whatever
changes were used in the work (which must be distributed under
Sections 1 and 2 above); and, if the work is an executable linked
with the Library, with the complete machine-readable "work that
uses the Library", as object code and/or source code, so that the
user can modify the Library and then relink to produce a modified
executable containing the modified Library. (It is understood
that the user who changes the contents of definitions files in the
Library will not necessarily be able to recompile the application
to use the modified definitions.)
b) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (1) uses at run time a
copy of the library already present on the user's computer system,
rather than copying library functions into the executable, and (2)
will operate properly with a modified version of the library, if
the user installs one, as long as the modified version is
interface-compatible with the version that the work was made with.
c) Accompany the work with a written offer, valid for at
least three years, to give the same user the materials
specified in Subsection 6a, above, for a charge no more
than the cost of performing this distribution.
d) If distribution of the work is made by offering access to copy
from a designated place, offer equivalent access to copy the above
specified materials from the same place.
e) Verify that the user has already received a copy of these
materials or that you have already sent this user a copy.
For an executable, the required form of the "work that uses the
Library" must include any data and utility programs needed for
reproducing the executable from it. However, as a special exception,
the materials to be distributed need not include anything that is
normally distributed (in either source or binary form) with the major
components (compiler, kernel, and so on) of the operating system on
which the executable runs, unless that component itself accompanies
the executable.
It may happen that this requirement contradicts the license
restrictions of other proprietary libraries that do not normally
accompany the operating system. Such a contradiction means you cannot
use both them and the Library together in an executable that you
distribute.
7. You may place library facilities that are a work based on the
Library side-by-side in a single library together with other library
facilities not covered by this License, and distribute such a combined
library, provided that the separate distribution of the work based on
the Library and of the other library facilities is otherwise
permitted, and provided that you do these two things:
a) Accompany the combined library with a copy of the same work
based on the Library, uncombined with any other library
facilities. This must be distributed under the terms of the
Sections above.
b) Give prominent notice with the combined library of the fact
that part of it is a work based on the Library, and explaining
where to find the accompanying uncombined form of the same work.
8. You may not copy, modify, sublicense, link with, or distribute
the Library except as expressly provided under this License. Any
attempt otherwise to copy, modify, sublicense, link with, or
distribute the Library is void, and will automatically terminate your
rights under this License. However, parties who have received copies,
or rights, from you under this License will not have their licenses
terminated so long as such parties remain in full compliance.
9. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Library or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Library (or any work based on the
Library), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Library or works based on it.
10. Each time you redistribute the Library (or any work based on the
Library), the recipient automatically receives a license from the
original licensor to copy, distribute, link with or modify the Library
subject to these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties with
this License.
11. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Library at all. For example, if a patent
license would not permit royalty-free redistribution of the Library by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Library.
If any portion of this section is held invalid or unenforceable under any
particular circumstance, the balance of the section is intended to apply,
and the section as a whole is intended to apply in other circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
12. If the distribution and/or use of the Library is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Library under this License may add
an explicit geographical distribution limitation excluding those countries,
so that distribution is permitted only in or among countries not thus
excluded. In such case, this License incorporates the limitation as if
written in the body of this License.
13. The Free Software Foundation may publish revised and/or new
versions of the Lesser General Public License from time to time.
Such new versions will be similar in spirit to the present version,
but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Library
specifies a version number of this License which applies to it and
"any later version", you have the option of following the terms and
conditions either of that version or of any later version published by
the Free Software Foundation. If the Library does not specify a
license version number, you may choose any version ever published by
the Free Software Foundation.
14. If you wish to incorporate parts of the Library into other free
programs whose distribution conditions are incompatible with these,
write to the author to ask for permission. For software which is
copyrighted by the Free Software Foundation, write to the Free
Software Foundation; we sometimes make exceptions for this. Our
decision will be guided by the two goals of preserving the free status
of all derivatives of our free software and of promoting the sharing
and reuse of software generally.
NO WARRANTY
15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY
KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME
THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY
AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU
FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A
FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF
SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Libraries
If you develop a new library, and you want it to be of the greatest
possible use to the public, we recommend making it free software that
everyone can redistribute and change. You can do so by permitting
redistribution under these terms (or, alternatively, under the terms of the
ordinary General Public License).
To apply these terms, attach the following notices to the library. It is
safest to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least the
"copyright" line and a pointer to where the full notice is found.
<one line to give the library's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301
USA
Also add information on how to contact you by electronic and paper mail.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the library, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the
library `Frob' (a library for tweaking knobs) written by James Random
Hacker.
<signature of Ty Coon>, 1 April 1990
Ty Coon, President of Vice
That's all there is to it!

@@ -0,0 +1,5 @@
include LICENSE README.md
include MANIFEST.in
include Makefile
recursive-include src *.h
recursive-include tests *.py

@@ -0,0 +1,35 @@
# Copyright (C) 2020 Red Hat, Inc.
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, see <http://www.gnu.org/licenses/>.
PYTHON ?= python3
default: all
all:
@$(PYTHON) setup.py build
test: all
@env PYTHONPATH=$$(find $$(pwd) -name "*.so" | head -n 1 | xargs dirname):src \
$(PYTHON) -m unittest discover -v
run-ipython: all
@env PYTHONPATH=$$(find $$(pwd) -name "*.so" | head -n 1 | xargs dirname):src i$(PYTHON)
run-root-ipython: all
@sudo env PYTHONPATH=$$(find $$(pwd) -name "*.so" | head -n 1 | xargs dirname):src i$(PYTHON)
clean:
-rm -r build

@@ -0,0 +1,38 @@
# pylibblkid
[![PyPI version](https://badge.fury.io/py/pylibblkid.svg)](https://badge.fury.io/py/pylibblkid)
Python bindings for the libblkid library.
## Usage examples
### Probing a device
```python
import blkid
pr = blkid.Probe()
pr.set_device("/dev/sda1")
pr.enable_superblocks(True)
pr.set_superblocks_flags(blkid.SUBLKS_TYPE | blkid.SUBLKS_USAGE | blkid.SUBLKS_UUID)
pr.do_safeprobe()
# print device properties as a dictionary
print(dict(pr))
```
### Searching for a device with a specified label
```python
import blkid
cache = blkid.Cache()
cache.probe_all()
dev = cache.find_device("LABEL", "mylabel")
# if found print found device and its properties
if dev:
print(dev.devname)
print(dev.tags)
```
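The `set_superblocks_flags` call above ORs bit flags together to select which superblock fields to probe. As a standalone illustration of that flag arithmetic (the numeric values below are invented for the example; real code should use the `blkid.SUBLKS_*` constants from the extension module):

```python
# Invented flag values, for illustration only -- use blkid.SUBLKS_* in real code.
SUBLKS_TYPE = 1 << 1
SUBLKS_USAGE = 1 << 2
SUBLKS_UUID = 1 << 3

flags = SUBLKS_TYPE | SUBLKS_USAGE | SUBLKS_UUID

# Membership tests are bitwise ANDs.
print(bool(flags & SUBLKS_USAGE))  # True: USAGE was selected
print(bool(flags & (1 << 4)))      # False: this field was not selected
```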

@@ -0,0 +1,5 @@
opengnsys-pyblkid (0.3) UNRELEASED; urgency=medium
* Initial release.
-- root <opengnsys@opengnsys.es> Tue, 12 Nov 2024 14:18:39 +0000

@@ -0,0 +1,25 @@
Source: opengnsys-pyblkid
Maintainer: OpenGnsys <opengnsys@opengnsys.org>
Section: python
Priority: optional
Build-Depends: debhelper-compat (= 12),
dh-python,
libblkid-dev,
python3-all,
python3-mock,
python3-pytest,
python3-setuptools
Standards-Version: 4.5.0
Rules-Requires-Root: no
Homepage: https://github.com/vojtechtrefny/pyblkid
Vcs-Browser: https://github.com/vojtechtrefny/pyblkid
Vcs-Git: https://github.com/vojtechtrefny/pyblkid
Package: opengnsys-pyblkid
Architecture: all
Depends: ${lib:Depends}, ${misc:Depends}, ${python3:Depends}
Description: Python3 interface to libblkid
Python bindings for the libblkid library.
.
This package contains a Python3 interface to the libblkid C library,
implemented as a compiled extension module.

@@ -0,0 +1,2 @@
opengnsys-pyblkid_0.3_all.deb python optional
opengnsys-pyblkid_0.3_amd64.buildinfo python optional

@@ -0,0 +1,22 @@
#!/usr/bin/make -f
export LC_ALL=C.UTF-8
export PYBUILD_NAME = pylibblkid
#export PYBUILD_BEFORE_TEST = cp -av README.rst {build_dir}
export PYBUILD_TEST_ARGS = -vv -s
#export PYBUILD_AFTER_TEST = rm -v {build_dir}/README.rst
# ./usr/lib/python3/dist-packages/blkid/
export PYBUILD_INSTALL_ARGS=--install-lib=/opt/opengnsys/python3/dist-packages/
%:
dh $@ --with python3 --buildsystem=pybuild
override_dh_gencontrol:
dh_gencontrol -- \
-Vlib:Depends=$(shell dpkg-query -W -f '$${Depends}' libblkid-dev \
| sed -E 's/.*(libblkid[[:alnum:].-]+).*/\1/')
override_dh_installdocs:
# Nothing, we don't want docs
override_dh_installchangelogs:
# Nothing, we don't want the changelog
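The `override_dh_gencontrol` target fills the `lib:Depends` substitution variable by reading a `-dev` package's `Depends` field with `dpkg-query` and extracting the runtime library package name with `sed`. The same extraction, rendered in Python on a made-up `Depends` string (shown here with `libblkid`, the library these bindings actually wrap):

```python
import re

# Made-up Depends field, roughly what dpkg-query -W -f '${Depends}' prints.
depends = "libc6 (>= 2.34), libblkid1 (>= 2.37.2)"

# Python rendering of the sed expression 's/.*(libblkid[[:alnum:].-]+).*/\1/':
# keep only the runtime library package name.
name = re.sub(r".*(libblkid[0-9A-Za-z.-]+).*", r"\1", depends)
print(name)  # libblkid1
```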

@@ -0,0 +1 @@
3.0 (quilt)

@@ -0,0 +1,3 @@
version=3
https://pypi.python.org/simple/pylibblkid \
.*/pylibblkid-(.+)\.tar\.gz#.*

@@ -0,0 +1,3 @@
[build-system]
requires = ["setuptools", "pkgconfig"]
build-backend = "setuptools.build_meta"

@@ -0,0 +1,78 @@
# Copyright (C) 2020 Red Hat, Inc.
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, see <http://www.gnu.org/licenses/>.
import sys
import pkgconfig
from setuptools import Extension, setup
pkgs = pkgconfig.list_all()
if "blkid" not in pkgs:
print("Please install libblkid-dev or libblkid-devel")
exit(1)
vers = sys.version_info
if f"python-{vers.major}.{vers.minor}" not in pkgs:
print("Please install python3-dev or python3-devel")
exit(1)
# define macros for blkid releases
macros = []
blkid_releases = ['2.24', '2.25', '2.30', '2.31', '2.36', '2.37', '2.39', '2.40']
for blkid_ver in blkid_releases:
if pkgconfig.installed("blkid", f">= {blkid_ver}"):
ver_list = blkid_ver.split('.')
full_release = '_'.join(ver_list)
macros.append((f"HAVE_BLKID_{full_release}", "1"))
if len(ver_list) > 2:
major_minor = '_'.join(ver_list[:2])
macros.append((f"HAVE_BLKID_{major_minor}", "1"))
with open("README.md", "r") as f:
long_description = f.read()
def main():
setup(name="pylibblkid",
version="0.3",
description="Python interface for the libblkid C library",
long_description=long_description,
long_description_content_type="text/markdown",
author="Vojtech Trefny",
author_email="vtrefny@redhat.com",
url="http://github.com/vojtechtrefny/pyblkid",
ext_modules=[Extension("blkid",
sources=["src/pyblkid.c",
"src/topology.c",
"src/partitions.c",
"src/cache.c",
"src/probe.c",],
include_dirs=["/usr/include"],
libraries=["blkid"],
library_dirs=["/usr/lib"],
define_macros=macros,
extra_compile_args=["-std=c99", "-Wall", "-Wextra", "-Werror"])],
classifiers=["Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU Lesser General Public License v2 or later (LGPLv2+)",
"Programming Language :: C",
"Programming Language :: Python :: 3",
"Operating System :: POSIX :: Linux"])
if __name__ == "__main__":
main()
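The version-gated macro logic in `setup.py` above can be sketched without `pkgconfig` (hypothetical helper, not part of the package: every release at or below the installed `blkid` version yields a `HAVE_BLKID_x_y` define, mirroring the release loop):

```python
def blkid_macros(installed, releases):
    """Return the HAVE_BLKID_* define for every release <= the installed version."""
    def key(version):
        # Compare versions numerically, component by component.
        return tuple(int(part) for part in version.split("."))

    return [
        (f"HAVE_BLKID_{rel.replace('.', '_')}", "1")
        for rel in releases
        if key(installed) >= key(rel)
    ]

# Every release up to 2.36 yields a define; 2.37 and later do not.
print(blkid_macros("2.36", ["2.24", "2.25", "2.30", "2.31", "2.36", "2.37", "2.39", "2.40"]))
```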

@@ -0,0 +1,336 @@
/*
* Copyright (C) 2020 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include "cache.h"
#include <blkid/blkid.h>
#include <stdbool.h>
#define UNUSED __attribute__((unused))
PyObject *Cache_new (PyTypeObject *type, PyObject *args UNUSED, PyObject *kwargs UNUSED) {
CacheObject *self = (CacheObject*) type->tp_alloc (type, 0);
if (self)
self->cache = NULL;
return (PyObject *) self;
}
int Cache_init (CacheObject *self UNUSED, PyObject *args, PyObject *kwargs) {
char *filename = NULL;
char *kwlist[] = { "filename", NULL };
int ret = 0;
if (!PyArg_ParseTupleAndKeywords (args, kwargs, "|s", kwlist, &filename)) {
return -1;
}
ret = blkid_get_cache (&(self->cache), filename);
if (ret < 0) {
PyErr_SetString (PyExc_RuntimeError, "Failed to get cache");
return -1;
}
return 0;
}
void Cache_dealloc (CacheObject *self) {
Py_TYPE (self)->tp_free ((PyObject *) self);
}
PyDoc_STRVAR(Cache_probe_all__doc__,
"probe_all (removable=False, new_only=False)\n\n"
"Probes all block devices.\n\n"
"With removable=True this also adds removable block devices to the cache. Don't forget that "
"removable devices could be pretty slow. It's a very bad idea to enable this by default. "
"With new_only=True this will scan only newly connected devices.");
static PyObject *Cache_probe_all (CacheObject *self, PyObject *args, PyObject *kwargs) {
int removable = 0;  /* the "p" converter writes an int, so use int rather than bool */
int new = 0;
char *kwlist[] = { "removable", "new_only", NULL };
int ret = 0;
if (!PyArg_ParseTupleAndKeywords (args, kwargs, "|pp", kwlist, &removable, &new)) {
return NULL;
}
if (new) {
ret = blkid_probe_all_new (self->cache);
if (ret < 0) {
PyErr_SetString (PyExc_RuntimeError, "Failed to probe new devices");
return NULL;
}
} else {
ret = blkid_probe_all (self->cache);
if (ret < 0) {
PyErr_SetString (PyExc_RuntimeError, "Failed to probe block devices");
return NULL;
}
if (removable) {
ret = blkid_probe_all_removable (self->cache);
if (ret < 0) {
PyErr_SetString (PyExc_RuntimeError, "Failed to probe removable devices");
return NULL;
}
}
}
Py_RETURN_NONE;
}
PyDoc_STRVAR(Cache_gc__doc__,
"gc\n\n"
"Removes garbage (non-existing devices) from the cache.");
static PyObject *Cache_gc (CacheObject *self, PyObject *Py_UNUSED (ignored)) {
blkid_gc_cache (self->cache);
Py_RETURN_NONE;
}
PyDoc_STRVAR(Cache_get_device__doc__,
"get_device (name)\n\n"
"Get device from cache.\n\n");
static PyObject *Cache_get_device (CacheObject *self, PyObject *args, PyObject *kwargs) {
const char *name = NULL;
char *kwlist[] = { "name", NULL };
blkid_dev device = NULL;
DeviceObject *dev_obj = NULL;
if (!PyArg_ParseTupleAndKeywords (args, kwargs, "s", kwlist, &name))
return NULL;
device = blkid_get_dev (self->cache, name, BLKID_DEV_FIND);
if (device == NULL)
Py_RETURN_NONE;
dev_obj = PyObject_New (DeviceObject, &DeviceType);
if (!dev_obj) {
PyErr_SetString (PyExc_MemoryError, "Failed to create a new Device object");
return NULL;
}
dev_obj->device = device;
dev_obj->cache = self->cache;
return (PyObject *) dev_obj;
}
PyDoc_STRVAR(Cache_find_device__doc__,
"find_device (tag, value)\n\n"
"Returns a device which matches a particular tag/value pair.\n"
" If there is more than one device that matches the search specification, "
"it returns the one with the highest priority\n\n");
static PyObject *Cache_find_device (CacheObject *self, PyObject *args, PyObject *kwargs) {
const char *tag = NULL;
const char *value = NULL;
char *kwlist[] = { "tag", "value", NULL };
blkid_dev device = NULL;
DeviceObject *dev_obj = NULL;
if (!PyArg_ParseTupleAndKeywords (args, kwargs, "ss", kwlist, &tag, &value))
return NULL;
device = blkid_find_dev_with_tag (self->cache, tag, value);
if (device == NULL)
Py_RETURN_NONE;
dev_obj = PyObject_New (DeviceObject, &DeviceType);
if (!dev_obj) {
PyErr_SetString (PyExc_MemoryError, "Failed to create a new Device object");
return NULL;
}
dev_obj->device = device;
dev_obj->cache = self->cache;
return (PyObject *) dev_obj;
}
static PyMethodDef Cache_methods[] = {
{"probe_all", (PyCFunction)(void(*)(void)) Cache_probe_all, METH_VARARGS|METH_KEYWORDS, Cache_probe_all__doc__},
{"gc", (PyCFunction) Cache_gc, METH_NOARGS, Cache_gc__doc__},
{"get_device", (PyCFunction)(void(*)(void)) Cache_get_device, METH_VARARGS|METH_KEYWORDS, Cache_get_device__doc__},
{"find_device", (PyCFunction)(void(*)(void)) Cache_find_device, METH_VARARGS|METH_KEYWORDS, Cache_find_device__doc__},
{NULL, NULL, 0, NULL},
};
static PyObject *Cache_get_devices (CacheObject *self, PyObject *Py_UNUSED (ignored)) {
blkid_dev_iterate iter;
blkid_dev device = NULL;
DeviceObject *dev_obj = NULL;
PyObject *list = NULL;
list = PyList_New (0);
if (!list) {
PyErr_NoMemory ();
return NULL;
}
iter = blkid_dev_iterate_begin (self->cache);
while (blkid_dev_next (iter, &device) == 0) {
dev_obj = PyObject_New (DeviceObject, &DeviceType);
if (!dev_obj) {
PyErr_NoMemory ();
return NULL;
}
dev_obj->device = device;
dev_obj->cache = self->cache;
PyList_Append (list, (PyObject *) dev_obj);
Py_DECREF (dev_obj);
}
blkid_dev_iterate_end(iter);
return (PyObject *) list;
}
static PyGetSetDef Cache_getseters[] = {
{"devices", (getter) Cache_get_devices, NULL, "returns all devices in the cache", NULL},
{NULL, NULL, NULL, NULL, NULL}
};
PyTypeObject CacheType = {
PyVarObject_HEAD_INIT (NULL, 0)
.tp_name = "blkid.Cache",
.tp_basicsize = sizeof (CacheObject),
.tp_itemsize = 0,
.tp_flags = Py_TPFLAGS_DEFAULT,
.tp_new = Cache_new,
.tp_dealloc = (destructor) Cache_dealloc,
.tp_init = (initproc) Cache_init,
.tp_methods = Cache_methods,
.tp_getset = Cache_getseters,
};
/*********************** DEVICE ***********************/
PyObject *Device_new (PyTypeObject *type, PyObject *args UNUSED, PyObject *kwargs UNUSED) {
DeviceObject *self = (DeviceObject*) type->tp_alloc (type, 0);
if (self) {
self->device = NULL;
self->cache = NULL;
}
return (PyObject *) self;
}
int Device_init (DeviceObject *self UNUSED, PyObject *args UNUSED, PyObject *kwargs UNUSED) {
return 0;
}
void Device_dealloc (DeviceObject *self) {
Py_TYPE (self)->tp_free ((PyObject *) self);
}
PyDoc_STRVAR(Device_verify__doc__,
"verify\n\n"
"Verify that the data in device is consistent with what is on the actual "
"block device. Normally this will be called when finding items in the cache, "
"but for long-running processes it is also desirable to revalidate an item before use.");
static PyObject *Device_verify (DeviceObject *self, PyObject *Py_UNUSED (ignored)) {
self->device = blkid_verify (self->cache, self->device);
Py_RETURN_NONE;
}
static PyMethodDef Device_methods[] = {
{"verify", (PyCFunction) Device_verify, METH_NOARGS, Device_verify__doc__},
{NULL, NULL, 0, NULL},
};
static PyObject *Device_get_devname (DeviceObject *self, PyObject *Py_UNUSED (ignored)) {
const char *name = blkid_dev_devname (self->device);
if (!name)
Py_RETURN_NONE;
return PyUnicode_FromString (name);
}
static PyObject *Device_get_tags (DeviceObject *self, PyObject *Py_UNUSED (ignored)) {
blkid_tag_iterate iter;
const char *type = NULL;
const char *value = NULL;
PyObject *dict = NULL;
PyObject *py_value = NULL;
dict = PyDict_New ();
if (!dict) {
PyErr_NoMemory ();
return NULL;
}
iter = blkid_tag_iterate_begin (self->device);
while (blkid_tag_next (iter, &type, &value) == 0) {
py_value = PyUnicode_FromString (value);
if (py_value == NULL) {
Py_INCREF (Py_None);
py_value = Py_None;
}
PyDict_SetItemString (dict, type, py_value);
Py_DECREF (py_value);
}
blkid_tag_iterate_end(iter);
return (PyObject *) dict;
}
static PyObject *Device_str (PyObject *self) {
char *str = NULL;
int ret = 0;
PyObject *py_str = NULL;
intptr_t id = (intptr_t) self;
PyObject *py_name = PyObject_GetAttrString (self, "devname");
ret = asprintf (&str, "blkid.Device instance (0x%" PRIxPTR "): %s", id, PyUnicode_AsUTF8 (py_name));
Py_DECREF (py_name);
if (ret < 0)
Py_RETURN_NONE;
py_str = PyUnicode_FromString (str);
free (str);
return py_str;
}
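The repr built here is easy to visualize from Python; a pure-Python sketch of the same formatting, with a stand-in `DummyDev` class since the real object requires the compiled module:

```python
# Pure-Python illustration of the string Device_str builds with asprintf;
# DummyDev is a hypothetical stand-in for a real blkid.Device.
class DummyDev:
    devname = "/dev/sda1"

dev = DummyDev()
text = "blkid.Device instance (0x%x): %s" % (id(dev), dev.devname)
```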
static PyGetSetDef Device_getseters[] = {
{"devname", (getter) Device_get_devname, NULL, "returns the name previously used for Cache.get_device.", NULL},
{"tags", (getter) Device_get_tags, NULL, "returns all tags for this device.", NULL},
{NULL, NULL, NULL, NULL, NULL}
};
PyTypeObject DeviceType = {
PyVarObject_HEAD_INIT (NULL, 0)
.tp_name = "blkid.Device",
.tp_basicsize = sizeof (DeviceObject),
.tp_itemsize = 0,
.tp_flags = Py_TPFLAGS_DEFAULT,
.tp_new = Device_new,
.tp_dealloc = (destructor) Device_dealloc,
.tp_init = (initproc) Device_init,
.tp_methods = Device_methods,
.tp_getset = Device_getseters,
.tp_str = Device_str,
};


@@ -0,0 +1,44 @@
/*
* Copyright (C) 2020 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#ifndef CACHE_H
#define CACHE_H
#include <Python.h>
#include <blkid/blkid.h>
typedef struct {
PyObject_HEAD
blkid_cache cache;
} CacheObject;
extern PyTypeObject CacheType;
PyObject *Cache_new (PyTypeObject *type, PyObject *args, PyObject *kwargs);
int Cache_init (CacheObject *self, PyObject *args, PyObject *kwargs);
void Cache_dealloc (CacheObject *self);
typedef struct {
PyObject_HEAD
blkid_dev device;
blkid_cache cache;
} DeviceObject;
extern PyTypeObject DeviceType;
#endif /* CACHE_H */

View File

@ -0,0 +1,534 @@
/*
* Copyright (C) 2020 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include "partitions.h"
#include <blkid/blkid.h>
#define UNUSED __attribute__((unused))
/*********************** PARTLIST ***********************/
PyObject *Partlist_new (PyTypeObject *type, PyObject *args UNUSED, PyObject *kwargs UNUSED) {
PartlistObject *self = (PartlistObject*) type->tp_alloc (type, 0);
if (self)
self->Parttable_object = NULL;
return (PyObject *) self;
}
int Partlist_init (PartlistObject *self UNUSED, PyObject *args UNUSED, PyObject *kwargs UNUSED) {
return 0;
}
void Partlist_dealloc (PartlistObject *self) {
if (self->Parttable_object)
Py_DECREF (self->Parttable_object);
Py_TYPE (self)->tp_free ((PyObject *) self);
}
PyObject *_Partlist_get_partlist_object (blkid_probe probe) {
PartlistObject *result = NULL;
blkid_partlist partlist = NULL;
if (!probe) {
PyErr_SetString (PyExc_RuntimeError, "internal error");
return NULL;
}
partlist = blkid_probe_get_partitions (probe);
if (!partlist) {
PyErr_SetString (PyExc_RuntimeError, "Failed to get partitions");
return NULL;
}
result = PyObject_New (PartlistObject, &PartlistType);
if (!result) {
PyErr_SetString (PyExc_MemoryError, "Failed to create a new Partlist object");
return NULL;
}
Py_INCREF (result);
result->partlist = partlist;
result->Parttable_object = NULL;
return (PyObject *) result;
}
PyDoc_STRVAR(Partlist_get_partition__doc__,
"get_partition (number)\n\n"
"Get partition by number.\n\n"
"It's possible that the list of partitions is *empty*, but there is a valid partition table on the disk.\n"
"This happens when on-disk details about partitions are unknown or the partition table is empty.");
static PyObject *Partlist_get_partition (PartlistObject *self, PyObject *args, PyObject *kwargs) {
char *kwlist[] = { "number", NULL };
int partnum = 0;
int numof = 0;
blkid_partition blkid_part = NULL;
PartitionObject *result = NULL;
if (!PyArg_ParseTupleAndKeywords (args, kwargs, "i", kwlist, &partnum)) {
return NULL;
}
numof = blkid_partlist_numof_partitions (self->partlist);
if (numof < 0) {
PyErr_SetString (PyExc_RuntimeError, "Failed to get number of partitions");
return NULL;
}
if (partnum < 0 || partnum >= numof) {
PyErr_Format (PyExc_RuntimeError, "Cannot get partition %d, partition table has only %d partitions", partnum, numof);
return NULL;
}
blkid_part = blkid_partlist_get_partition (self->partlist, partnum);
if (!blkid_part) {
PyErr_Format (PyExc_RuntimeError, "Failed to get partition %d", partnum);
return NULL;
}
result = PyObject_New (PartitionObject, &PartitionType);
if (!result) {
PyErr_SetString (PyExc_MemoryError, "Failed to create a new Partition object");
return NULL;
}
result->number = partnum;
result->partition = blkid_part;
result->Parttable_object = NULL;
return (PyObject *) result;
}
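The index handling above boils down to a simple range check: valid indices run from 0 to `numof_partitions - 1`. In Python terms (a sketch, not the module's own code):

```python
# Sketch of the bounds logic in Partlist_get_partition: valid indices are
# 0 .. numof_partitions - 1; anything outside raises.
def check_partnum(partnum, numof):
    if partnum < 0 or partnum >= numof:
        raise RuntimeError(
            "Cannot get partition %d, partition table has only %d partitions"
            % (partnum, numof))
    return partnum
```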
#ifdef HAVE_BLKID_2_25
PyDoc_STRVAR(Partlist_get_partition_by_partno__doc__,
"get_partition_by_partno(number)\n\n"
"Get partition by partition number.\n\n"
"This does not assume any order of partitions and correctly handles \"out of order\" "
"partition tables, where partition N may be located after partition N+1 on the disk.");
static PyObject *Partlist_get_partition_by_partno (PartlistObject *self, PyObject *args, PyObject *kwargs) {
char *kwlist[] = { "number", NULL };
int partno = 0;
blkid_partition blkid_part = NULL;
PartitionObject *result = NULL;
if (!PyArg_ParseTupleAndKeywords (args, kwargs, "i", kwlist, &partno)) {
return NULL;
}
blkid_part = blkid_partlist_get_partition_by_partno (self->partlist, partno);
if (!blkid_part) {
PyErr_Format (PyExc_RuntimeError, "Failed to get partition %d", partno);
return NULL;
}
result = PyObject_New (PartitionObject, &PartitionType);
if (!result) {
PyErr_NoMemory ();
return NULL;
}
result->number = partno;
result->partition = blkid_part;
result->Parttable_object = NULL;
return (PyObject *) result;
}
#endif
static int _Py_Dev_Converter (PyObject *obj, void *p) {
#ifdef HAVE_LONG_LONG
*((dev_t *)p) = PyLong_AsUnsignedLongLong (obj);
#else
*((dev_t *)p) = PyLong_AsUnsignedLong (obj);
#endif
if (PyErr_Occurred ())
return 0;
return 1;
}
#ifdef HAVE_LONG_LONG
#define _PyLong_FromDev PyLong_FromLongLong
#else
#define _PyLong_FromDev PyLong_FromLong
#endif
PyDoc_STRVAR(Partlist_devno_to_partition__doc__,
"devno_to_partition (devno)\n\n"
"Get partition by devno.\n");
static PyObject *Partlist_devno_to_partition (PartlistObject *self, PyObject *args, PyObject *kwargs) {
dev_t devno = 0;
char *kwlist[] = { "devno", NULL };
blkid_partition blkid_part = NULL;
PartitionObject *result = NULL;
if (!PyArg_ParseTupleAndKeywords (args, kwargs, "O&:devno_to_devname", kwlist, _Py_Dev_Converter, &devno))
return NULL;
blkid_part = blkid_partlist_devno_to_partition (self->partlist, devno);
if (!blkid_part) {
PyErr_Format (PyExc_RuntimeError, "Failed to get partition %zu", (size_t) devno);
return NULL;
}
result = PyObject_New (PartitionObject, &PartitionType);
if (!result) {
PyErr_NoMemory ();
return NULL;
}
result->number = blkid_partition_get_partno (blkid_part);
result->partition = blkid_part;
result->Parttable_object = NULL;
return (PyObject *) result;
}
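The `devno` argument is a raw `dev_t`. From Python, such numbers are usually built and unpacked with `os.makedev`/`os.major`/`os.minor`:

```python
import os

# A dev_t packs a major/minor pair; 8:1 is commonly /dev/sda1 on Linux.
devno = os.makedev(8, 1)
```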
static PyMethodDef Partlist_methods[] = {
{"get_partition", (PyCFunction)(void(*)(void)) Partlist_get_partition, METH_VARARGS|METH_KEYWORDS, Partlist_get_partition__doc__},
#ifdef HAVE_BLKID_2_25
{"get_partition_by_partno", (PyCFunction)(void(*)(void)) Partlist_get_partition_by_partno, METH_VARARGS|METH_KEYWORDS, Partlist_get_partition_by_partno__doc__},
#endif
{"devno_to_partition", (PyCFunction)(void(*)(void)) Partlist_devno_to_partition, METH_VARARGS|METH_KEYWORDS, Partlist_devno_to_partition__doc__},
{NULL, NULL, 0, NULL},
};
static PyObject *Partlist_get_table (PartlistObject *self, PyObject *Py_UNUSED (ignored)) {
if (self->Parttable_object) {
Py_INCREF (self->Parttable_object);
return self->Parttable_object;
}
self->Parttable_object = _Parttable_get_parttable_object (self->partlist);
return self->Parttable_object;
}
static PyObject *Partlist_get_numof_partitions (PartlistObject *self, PyObject *Py_UNUSED (ignored)) {
int ret = 0;
ret = blkid_partlist_numof_partitions (self->partlist);
if (ret < 0) {
PyErr_SetString (PyExc_RuntimeError, "Failed to get number of partitions");
return NULL;
}
return PyLong_FromLong (ret);
}
static PyGetSetDef Partlist_getseters[] = {
{"table", (getter) Partlist_get_table, NULL, "binary interface for partition table on the device", NULL},
{"numof_partitions", (getter) Partlist_get_numof_partitions, NULL, "number of partitions in the list", NULL},
{NULL, NULL, NULL, NULL, NULL}
};
PyTypeObject PartlistType = {
PyVarObject_HEAD_INIT (NULL, 0)
.tp_name = "blkid.Partlist",
.tp_basicsize = sizeof (PartlistObject),
.tp_itemsize = 0,
.tp_flags = Py_TPFLAGS_DEFAULT,
.tp_new = Partlist_new,
.tp_dealloc = (destructor) Partlist_dealloc,
.tp_init = (initproc) Partlist_init,
.tp_methods = Partlist_methods,
.tp_getset = Partlist_getseters,
};
/*********************** PARTTABLE ***********************/
PyObject *Parttable_new (PyTypeObject *type, PyObject *args UNUSED, PyObject *kwargs UNUSED) {
ParttableObject *self = (ParttableObject*) type->tp_alloc (type, 0);
return (PyObject *) self;
}
int Parttable_init (ParttableObject *self UNUSED, PyObject *args UNUSED, PyObject *kwargs UNUSED) {
return 0;
}
void Parttable_dealloc (ParttableObject *self) {
Py_TYPE (self)->tp_free ((PyObject *) self);
}
PyObject *_Parttable_get_parttable_object (blkid_partlist partlist) {
ParttableObject *result = NULL;
blkid_parttable table = NULL;
if (!partlist) {
PyErr_SetString(PyExc_RuntimeError, "internal error");
return NULL;
}
table = blkid_partlist_get_table (partlist);
if (!table) {
PyErr_SetString (PyExc_RuntimeError, "Failed to get partitions");
return NULL;
}
result = PyObject_New (ParttableObject, &ParttableType);
if (!result) {
PyErr_SetString (PyExc_MemoryError, "Failed to create a new Parttable object");
return NULL;
}
Py_INCREF (result);
result->table = table;
return (PyObject *) result;
}
PyDoc_STRVAR(Parttable_get_parent__doc__,
"get_parent ()\n\n"
"Parent for nested partition tables.");
static PyObject *Parttable_get_parent (ParttableObject *self, PyObject *Py_UNUSED (ignored)) {
blkid_partition blkid_part = NULL;
PartitionObject *result = NULL;
blkid_part = blkid_parttable_get_parent (self->table);
if (!blkid_part)
Py_RETURN_NONE;
result = PyObject_New (PartitionObject, &PartitionType);
if (!result) {
PyErr_SetString (PyExc_MemoryError, "Failed to create a new Partition object");
return NULL;
}
result->number = 0;
result->partition = blkid_part;
result->Parttable_object = NULL;
return (PyObject *) result;
}
static PyMethodDef Parttable_methods[] = {
{"get_parent", (PyCFunction)(void(*)(void)) Parttable_get_parent, METH_NOARGS, Parttable_get_parent__doc__},
{NULL, NULL, 0, NULL},
};
static PyObject *Parttable_get_type (ParttableObject *self, PyObject *Py_UNUSED (ignored)) {
const char *pttype = blkid_parttable_get_type (self->table);
if (!pttype)
Py_RETURN_NONE;
return PyUnicode_FromString (pttype);
}
static PyObject *Parttable_get_id (ParttableObject *self, PyObject *Py_UNUSED (ignored)) {
const char *ptid = blkid_parttable_get_id (self->table);
if (!ptid)
Py_RETURN_NONE;
return PyUnicode_FromString (ptid);
}
static PyObject *Parttable_get_offset (ParttableObject *self, PyObject *Py_UNUSED (ignored)) {
blkid_loff_t offset = blkid_parttable_get_offset (self->table);
return PyLong_FromLongLong (offset);
}
static PyGetSetDef Parttable_getseters[] = {
{"type", (getter) Parttable_get_type, NULL, "partition table type (type name, e.g. 'dos', 'gpt', ...)", NULL},
{"id", (getter) Parttable_get_id, NULL, "GPT disk UUID or DOS disk ID (in hex format)", NULL},
{"offset", (getter) Parttable_get_offset, NULL, "position (in bytes) of the partition table", NULL},
{NULL, NULL, NULL, NULL, NULL}
};
PyTypeObject ParttableType = {
PyVarObject_HEAD_INIT (NULL, 0)
.tp_name = "blkid.Parttable",
.tp_basicsize = sizeof (ParttableObject),
.tp_itemsize = 0,
.tp_flags = Py_TPFLAGS_DEFAULT,
.tp_new = Parttable_new,
.tp_dealloc = (destructor) Parttable_dealloc,
.tp_init = (initproc) Parttable_init,
.tp_methods = Parttable_methods,
.tp_getset = Parttable_getseters,
};
/*********************** PARTITION ***********************/
PyObject *Partition_new (PyTypeObject *type, PyObject *args UNUSED, PyObject *kwargs UNUSED) {
PartitionObject *self = (PartitionObject*) type->tp_alloc (type, 0);
if (self)
self->Parttable_object = NULL;
return (PyObject *) self;
}
int Partition_init (PartitionObject *self, PyObject *args, PyObject *kwargs) {
char *kwlist[] = { "number", NULL };
if (!PyArg_ParseTupleAndKeywords (args, kwargs, "i", kwlist, &(self->number))) {
return -1;
}
self->partition = NULL;
return 0;
}
void Partition_dealloc (PartitionObject *self) {
if (self->Parttable_object)
Py_DECREF (self->Parttable_object);
Py_TYPE (self)->tp_free ((PyObject *) self);
}
static PyObject *Partition_get_type (PartitionObject *self, PyObject *Py_UNUSED (ignored)) {
int type = blkid_partition_get_type (self->partition);
return PyLong_FromLong (type);
}
static PyObject *Partition_get_type_string (PartitionObject *self, PyObject *Py_UNUSED (ignored)) {
const char *type = blkid_partition_get_type_string (self->partition);
if (!type)
Py_RETURN_NONE;
return PyUnicode_FromString (type);
}
static PyObject *Partition_get_uuid (PartitionObject *self, PyObject *Py_UNUSED (ignored)) {
const char *uuid = blkid_partition_get_uuid (self->partition);
if (!uuid)
Py_RETURN_NONE;
return PyUnicode_FromString (uuid);
}
static PyObject *Partition_get_is_extended (PartitionObject *self, PyObject *Py_UNUSED (ignored)) {
int extended = blkid_partition_is_extended (self->partition);
if (extended == 1)
Py_RETURN_TRUE;
else
Py_RETURN_FALSE;
}
static PyObject *Partition_get_is_logical (PartitionObject *self, PyObject *Py_UNUSED (ignored)) {
int logical = blkid_partition_is_logical (self->partition);
if (logical == 1)
Py_RETURN_TRUE;
else
Py_RETURN_FALSE;
}
static PyObject *Partition_get_is_primary (PartitionObject *self, PyObject *Py_UNUSED (ignored)) {
int primary = blkid_partition_is_primary (self->partition);
if (primary == 1)
Py_RETURN_TRUE;
else
Py_RETURN_FALSE;
}
static PyObject *Partition_get_name (PartitionObject *self, PyObject *Py_UNUSED (ignored)) {
const char *name = blkid_partition_get_name (self->partition);
if (!name)
Py_RETURN_NONE;
return PyUnicode_FromString (name);
}
static PyObject *Partition_get_flags (PartitionObject *self, PyObject *Py_UNUSED (ignored)) {
unsigned long long flags = blkid_partition_get_flags (self->partition);
return PyLong_FromUnsignedLongLong (flags);
}
static PyObject *Partition_get_partno (PartitionObject *self, PyObject *Py_UNUSED (ignored)) {
int partno = blkid_partition_get_partno (self->partition);
return PyLong_FromLong (partno);
}
static PyObject *Partition_get_size (PartitionObject *self, PyObject *Py_UNUSED (ignored)) {
blkid_loff_t size = blkid_partition_get_size (self->partition);
return PyLong_FromLongLong (size);
}
static PyObject *Partition_get_start (PartitionObject *self, PyObject *Py_UNUSED (ignored)) {
blkid_loff_t start = blkid_partition_get_start (self->partition);
return PyLong_FromLongLong (start);
}
PyObject *_Partition_get_parttable_object (blkid_partition partition) {
ParttableObject *result = NULL;
blkid_parttable table = NULL;
if (!partition) {
PyErr_SetString(PyExc_RuntimeError, "internal error");
return NULL;
}
table = blkid_partition_get_table (partition);
if (!table) {
PyErr_SetString (PyExc_RuntimeError, "Failed to get partition table");
return NULL;
}
result = PyObject_New (ParttableObject, &ParttableType);
if (!result) {
PyErr_SetString (PyExc_MemoryError, "Failed to create a new Parttable object");
return NULL;
}
Py_INCREF (result);
result->table = table;
return (PyObject *) result;
}
static PyObject *Partition_get_table (PartitionObject *self, PyObject *Py_UNUSED (ignored)) {
if (self->Parttable_object) {
Py_INCREF (self->Parttable_object);
return self->Parttable_object;
}
self->Parttable_object = _Partition_get_parttable_object (self->partition);
return self->Parttable_object;
}
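Both `Partlist.table` and `Partition.table` use the same lazy-caching pattern: build the wrapper object on first access, then hand out new references to the cached one. A pure-Python equivalent of that pattern (hypothetical names):

```python
# Lazy caching equivalent to Partition_get_table / Partlist_get_table:
# the factory runs once, subsequent accesses return the cached object.
class TableOwner:
    def __init__(self, factory):
        self._factory = factory
        self._table = None

    @property
    def table(self):
        if self._table is None:
            self._table = self._factory()
        return self._table

calls = []
owner = TableOwner(lambda: calls.append(1) or object())
first = owner.table
second = owner.table
```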
static PyGetSetDef Partition_getseters[] = {
{"type", (getter) Partition_get_type, NULL, "partition type", NULL},
{"type_string", (getter) Partition_get_type_string, NULL, "partition type string, note the type string is supported by a small subset of partition tables (e.g. Mac and EFI GPT)", NULL},
{"uuid", (getter) Partition_get_uuid, NULL, "partition UUID string if supported by PT (e.g. GPT)", NULL},
{"is_extended", (getter) Partition_get_is_extended, NULL, "returns whether the partition is extended or not", NULL},
{"is_logical", (getter) Partition_get_is_logical, NULL, "returns whether the partition is logical or not", NULL},
{"is_primary", (getter) Partition_get_is_primary, NULL, "returns whether the partition is primary or not", NULL},
{"name", (getter) Partition_get_name, NULL, "partition name string if supported by PT (e.g. Mac)", NULL},
{"flags", (getter) Partition_get_flags, NULL, "partition flags (or attributes for gpt)", NULL},
{"partno", (getter) Partition_get_partno, NULL, "proposed partition number (e.g. 'N' from 'sdaN') or -1 in case of error", NULL},
{"size", (getter) Partition_get_size, NULL, "size of the partition (in 512-byte sectors)", NULL},
{"start", (getter) Partition_get_start, NULL, "start of the partition (in 512-byte sectors)", NULL},
{"table", (getter) Partition_get_table, NULL, "partition table object (usually the same for all partitions, except nested partition tables)", NULL},
{NULL, NULL, NULL, NULL, NULL}
};
PyTypeObject PartitionType = {
PyVarObject_HEAD_INIT (NULL, 0)
.tp_name = "blkid.Partition",
.tp_basicsize = sizeof (PartitionObject),
.tp_itemsize = 0,
.tp_flags = Py_TPFLAGS_DEFAULT,
.tp_new = Partition_new,
.tp_dealloc = (destructor) Partition_dealloc,
.tp_init = (initproc) Partition_init,
.tp_getset = Partition_getseters,
};
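`size` and `start` are reported in 512-byte sectors regardless of the device's logical sector size, so byte values are obtained by multiplying:

```python
# Partition.size / Partition.start are expressed in 512-byte sectors;
# converting a sector count to bytes:
SECTOR = 512
size_sectors = 2048
size_bytes = size_sectors * SECTOR
```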


@ -0,0 +1,68 @@
/*
* Copyright (C) 2020 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#ifndef PARTITIONS_H
#define PARTITIONS_H
#include <Python.h>
#include <blkid/blkid.h>
typedef struct {
PyObject_HEAD
blkid_partlist partlist;
PyObject *Parttable_object;
} PartlistObject;
extern PyTypeObject PartlistType;
PyObject *Partlist_new (PyTypeObject *type, PyObject *args, PyObject *kwargs);
int Partlist_init (PartlistObject *self, PyObject *args, PyObject *kwargs);
void Partlist_dealloc (PartlistObject *self);
PyObject *_Partlist_get_partlist_object (blkid_probe probe);
typedef struct {
PyObject_HEAD
blkid_parttable table;
} ParttableObject;
extern PyTypeObject ParttableType;
PyObject *Parttable_new (PyTypeObject *type, PyObject *args, PyObject *kwargs);
int Parttable_init (ParttableObject *self, PyObject *args, PyObject *kwargs);
void Parttable_dealloc (ParttableObject *self);
PyObject *_Parttable_get_parttable_object (blkid_partlist partlist);
typedef struct {
PyObject_HEAD
int number;
blkid_partition partition;
PyObject *Parttable_object;
} PartitionObject;
extern PyTypeObject PartitionType;
PyObject *Partition_new (PyTypeObject *type, PyObject *args, PyObject *kwargs);
int Partition_init (PartitionObject *self, PyObject *args, PyObject *kwargs);
void Partition_dealloc (PartitionObject *self);
PyObject *_Partition_get_parttable_object (blkid_partition partition);
#endif /* PARTITIONS_H */


@@ -0,0 +1,959 @@
/*
* Copyright (C) 2020 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include "probe.h"
#include "topology.h"
#include "partitions.h"
#include <blkid/blkid.h>
#include <errno.h>
#include <fcntl.h>
#include <stdbool.h>
#define UNUSED __attribute__((unused))
PyObject *Probe_new (PyTypeObject *type, PyObject *args UNUSED, PyObject *kwargs UNUSED) {
ProbeObject *self = (ProbeObject*) type->tp_alloc (type, 0);
if (self) {
self->probe = NULL;
self->fd = -1;
self->topology = NULL;
self->partlist = NULL;
}
return (PyObject *) self;
}
int Probe_init (ProbeObject *self, PyObject *args UNUSED, PyObject *kwargs UNUSED) {
if (self->probe)
blkid_free_probe (self->probe);
self->probe = blkid_new_probe ();
if (!self->probe) {
PyErr_SetString (PyExc_MemoryError, "Failed to create new Probe.");
return -1;
}
return 0;
}
void Probe_dealloc (ProbeObject *self) {
if (!self->probe)
/* if init fails */
return;
if (self->fd > 0)
close (self->fd);
if (self->topology)
Py_DECREF (self->topology);
if (self->partlist)
Py_DECREF (self->partlist);
blkid_free_probe (self->probe);
Py_TYPE (self)->tp_free ((PyObject *) self);
}
PyDoc_STRVAR(Probe_set_device__doc__,
"set_device (device, flags=os.O_RDONLY|os.O_CLOEXEC, offset=0, size=0)\n\n"
"Assigns the device to probe control struct, resets internal buffers and resets the current probing.\n\n"
"'flags' define flags for the 'open' system call. By default the device will be opened as read-only.\n"
"'offset' and 'size' specify begin and size of probing area (zero means whole device/file)");
static PyObject *Probe_set_device (ProbeObject *self, PyObject *args, PyObject *kwargs) {
int ret = 0;
char *kwlist[] = { "device", "flags", "offset", "size", NULL };
char *device = NULL;
blkid_loff_t offset = 0;
blkid_loff_t size = 0;
int flags = O_RDONLY|O_CLOEXEC;
if (!PyArg_ParseTupleAndKeywords (args, kwargs, "s|iKK", kwlist, &device, &flags, &offset, &size)) {
return NULL;
}
self->fd = open (device, flags);
if (self->fd == -1) {
PyErr_Format (PyExc_OSError, "Failed to open device '%s': %s", device, strerror (errno));
return NULL;
}
ret = blkid_probe_set_device (self->probe, self->fd, offset, size);
if (ret != 0) {
PyErr_SetString (PyExc_RuntimeError, "Failed to set device");
return NULL;
}
Py_RETURN_NONE;
}
PyDoc_STRVAR(Probe_enable_superblocks__doc__,
"enable_superblocks (enable)\n\n" \
"Enables/disables the superblocks probing for non-binary interface.");
static PyObject *Probe_enable_superblocks (ProbeObject *self, PyObject *args, PyObject *kwargs) {
int ret = 0;
int enable = 0;
char *kwlist[] = { "enable", NULL };
if (!PyArg_ParseTupleAndKeywords(args, kwargs, "p", kwlist, &enable)) {
return NULL;
}
ret = blkid_probe_enable_superblocks (self->probe, enable);
if (ret != 0) {
PyErr_Format (PyExc_RuntimeError, "Failed to %s superblocks probing", enable ? "enable" : "disable");
return NULL;
}
Py_RETURN_NONE;
}
PyDoc_STRVAR(Probe_set_superblocks_flags__doc__,
"set_superblocks_flags (flags)\n\n" \
"Sets probing flags to the superblocks prober. This function is optional, the default are blkid.SUBLKS_DEFAULTS flags.\n"
"Use blkid.SUBLKS_* constants for the 'flags' argument.");
static PyObject *Probe_set_superblocks_flags (ProbeObject *self, PyObject *args, PyObject *kwargs) {
int ret = 0;
int flags = 0;
char *kwlist[] = { "flags", NULL };
if (!PyArg_ParseTupleAndKeywords(args, kwargs, "i", kwlist, &flags)) {
return NULL;
}
ret = blkid_probe_set_superblocks_flags (self->probe, flags);
if (ret != 0) {
PyErr_SetString (PyExc_RuntimeError, "Failed to set superblocks flags");
return NULL;
}
Py_RETURN_NONE;
}
PyDoc_STRVAR(Probe_filter_superblocks_type__doc__,
"filter_superblocks_type (flag, names)\n\n" \
"Filter superblocks prober results based on type.\n"
"blkid.FLTR_NOTIN - probe for all items which are NOT IN names\n"
"blkid.FLTR_ONLYIN - probe for items which are IN names\n"
"names: array of probing function names (e.g. 'vfat').");
static PyObject *Probe_filter_superblocks_type (ProbeObject *self, PyObject *args, PyObject *kwargs) {
int ret = 0;
int flag = 0;
PyObject *pynames = NULL;
PyObject *pystring = NULL;
Py_ssize_t len = 0;
char **names = NULL;
char *kwlist[] = { "flag", "names", NULL };
if (!PyArg_ParseTupleAndKeywords(args, kwargs, "iO", kwlist, &flag, &pynames)) {
return NULL;
}
if (!PySequence_Check (pynames)) {
PyErr_SetString (PyExc_AttributeError, "Failed to parse list of names for filter");
return NULL;
}
len = PySequence_Size (pynames);
if (len < 1) {
PyErr_SetString (PyExc_AttributeError, "Failed to parse list of names for filter");
return NULL;
}
names = malloc(sizeof (char *) * (len + 1));
if (!names) {
PyErr_NoMemory ();
return NULL;
}
for (Py_ssize_t i = 0; i < len; i++) {
pystring = PyUnicode_AsEncodedString (PySequence_GetItem (pynames, i), "utf-8", "replace");
names[i] = strdup (PyBytes_AsString (pystring));
Py_DECREF (pystring);
}
names[len] = NULL;
ret = blkid_probe_filter_superblocks_type (self->probe, flag, names);
if (ret != 0) {
PyErr_SetString (PyExc_RuntimeError, "Failed to set probe filter");
for (Py_ssize_t i = 0; i < len; i++)
free(names[i]);
free (names);
return NULL;
}
for (Py_ssize_t i = 0; i < len; i++)
free(names[i]);
free (names);
Py_RETURN_NONE;
}
PyDoc_STRVAR(Probe_filter_superblocks_usage__doc__,
"filter_superblocks_usage (flag, usage)\n\n" \
"Filter superblocks prober results based on usage.\n"
"blkid.FLTR_NOTIN - probe for all items which are NOT IN names\n"
"blkid.FLTR_ONLYIN - probe for items which are IN names\n"
"usage: blkid.USAGE_* flags");
static PyObject *Probe_filter_superblocks_usage (ProbeObject *self, PyObject *args, PyObject *kwargs) {
int ret = 0;
int flag = 0;
int usage = 0;
char *kwlist[] = { "flag", "usage", NULL };
if (!PyArg_ParseTupleAndKeywords(args, kwargs, "ii", kwlist, &flag, &usage)) {
return NULL;
}
ret = blkid_probe_filter_superblocks_usage (self->probe, flag, usage);
if (ret != 0) {
PyErr_SetString (PyExc_RuntimeError, "Failed to set probe filter");
return NULL;
}
Py_RETURN_NONE;
}
PyDoc_STRVAR(Probe_invert_superblocks_filter__doc__,
"invert_superblocks_filter ()\n\n"
"This function inverts superblocks probing filter.\n");
static PyObject *Probe_invert_superblocks_filter (ProbeObject *self, PyObject *Py_UNUSED (ignored)) {
int ret = 0;
ret = blkid_probe_invert_superblocks_filter (self->probe);
if (ret != 0) {
PyErr_SetString (PyExc_RuntimeError, "Failed to invert superblock probing filter");
return NULL;
}
Py_RETURN_NONE;
}
PyDoc_STRVAR(Probe_reset_superblocks_filter__doc__,
"reset_superblocks_filter ()\n\n"
"This function resets superblocks probing filter.\n");
static PyObject *Probe_reset_superblocks_filter (ProbeObject *self, PyObject *Py_UNUSED (ignored)) {
int ret = 0;
ret = blkid_probe_reset_superblocks_filter (self->probe);
if (ret != 0) {
PyErr_SetString (PyExc_RuntimeError, "Failed to reset superblock probing filter");
return NULL;
}
Py_RETURN_NONE;
}
PyDoc_STRVAR(Probe_enable_partitions__doc__,
"enable_partitions (enable)\n\n" \
"Enables/disables the partitions probing for non-binary interface.");
static PyObject *Probe_enable_partitions (ProbeObject *self, PyObject *args, PyObject *kwargs) {
int ret = 0;
int enable = 0;
char *kwlist[] = { "enable", NULL };
if (!PyArg_ParseTupleAndKeywords(args, kwargs, "p", kwlist, &enable)) {
return NULL;
}
ret = blkid_probe_enable_partitions (self->probe, enable);
if (ret != 0) {
PyErr_Format (PyExc_RuntimeError, "Failed to %s partitions probing", enable ? "enable" : "disable");
return NULL;
}
Py_RETURN_NONE;
}
PyDoc_STRVAR(Probe_set_partitions_flags__doc__,
"set_partitions_flags (flags)\n\n" \
"Sets probing flags to the partitions prober. This function is optional.\n"
"Use blkid.PARTS_* constants for the 'flags' argument.");
static PyObject *Probe_set_partitions_flags (ProbeObject *self, PyObject *args, PyObject *kwargs) {
int ret = 0;
int flags = 0;
char *kwlist[] = { "flags", NULL };
if (!PyArg_ParseTupleAndKeywords(args, kwargs, "i", kwlist, &flags)) {
return NULL;
}
ret = blkid_probe_set_partitions_flags (self->probe, flags);
if (ret != 0) {
PyErr_SetString (PyExc_RuntimeError, "Failed to set partitions flags");
return NULL;
}
Py_RETURN_NONE;
}
PyDoc_STRVAR(Probe_filter_partitions_type__doc__,
"filter_partitions_type (flag, names)\n\n" \
"Filter partitions prober results based on type.\n"
"blkid.FLTR_NOTIN - probe for all items which are NOT IN names\n"
"blkid.FLTR_ONLYIN - probe for items which are IN names\n"
"names: array of probing function names (e.g. 'vfat').");
static PyObject *Probe_filter_partitions_type (ProbeObject *self, PyObject *args, PyObject *kwargs) {
int ret = 0;
int flag = 0;
PyObject *pynames = NULL;
PyObject *pystring = NULL;
Py_ssize_t len = 0;
char **names = NULL;
char *kwlist[] = { "flag", "names", NULL };
if (!PyArg_ParseTupleAndKeywords(args, kwargs, "iO", kwlist, &flag, &pynames)) {
return NULL;
}
if (!PySequence_Check (pynames)) {
PyErr_SetString (PyExc_AttributeError, "Failed to parse list of names for filter");
return NULL;
}
len = PySequence_Size (pynames);
if (len < 1) {
PyErr_SetString (PyExc_AttributeError, "Failed to parse list of names for filter");
return NULL;
}
names = malloc(sizeof (char *) * (len + 1));
if (!names) {
PyErr_NoMemory ();
return NULL;
}
for (Py_ssize_t i = 0; i < len; i++) {
pystring = PyUnicode_AsEncodedString (PySequence_GetItem (pynames, i), "utf-8", "replace");
names[i] = strdup (PyBytes_AsString (pystring));
Py_DECREF (pystring);
}
names[len] = NULL;
ret = blkid_probe_filter_partitions_type (self->probe, flag, names);
if (ret != 0) {
PyErr_SetString (PyExc_RuntimeError, "Failed to set probe filter");
for (Py_ssize_t i = 0; i < len; i++)
free(names[i]);
free (names);
return NULL;
}
for (Py_ssize_t i = 0; i < len; i++)
free(names[i]);
free (names);
Py_RETURN_NONE;
}
PyDoc_STRVAR(Probe_invert_partitions_filter__doc__,
"invert_partitions_filter ()\n\n"
"This function inverts partitions probing filter.\n");
static PyObject *Probe_invert_partitions_filter (ProbeObject *self, PyObject *Py_UNUSED (ignored)) {
int ret = 0;
ret = blkid_probe_invert_partitions_filter (self->probe);
if (ret != 0) {
PyErr_SetString (PyExc_RuntimeError, "Failed to invert partitions probing filter");
return NULL;
}
Py_RETURN_NONE;
}
PyDoc_STRVAR(Probe_reset_partitions_filter__doc__,
"reset_partitions_filter ()\n\n"
"This function resets partitions probing filter.\n");
static PyObject *Probe_reset_partitions_filter (ProbeObject *self, PyObject *Py_UNUSED (ignored)) {
int ret = 0;
ret = blkid_probe_reset_partitions_filter (self->probe);
if (ret != 0) {
PyErr_SetString (PyExc_RuntimeError, "Failed to reset partitions probing filter");
return NULL;
}
Py_RETURN_NONE;
}
PyDoc_STRVAR(Probe_enable_topology__doc__,
"enable_topology (enable)\n\n" \
"Enables/disables the topology probing for non-binary interface.");
static PyObject *Probe_enable_topology (ProbeObject *self, PyObject *args, PyObject *kwargs) {
int ret = 0;
int enable = 0;
char *kwlist[] = { "enable", NULL };
if (!PyArg_ParseTupleAndKeywords(args, kwargs, "p", kwlist, &enable)) {
return NULL;
}
ret = blkid_probe_enable_topology (self->probe, enable);
if (ret != 0) {
PyErr_Format (PyExc_RuntimeError, "Failed to %s topology probing", enable ? "enable" : "disable");
return NULL;
}
Py_RETURN_NONE;
}
PyDoc_STRVAR(Probe_lookup_value__doc__,
"lookup_value (name)\n\n" \
"Looks up the value of a NAME=value pair from the probing results and returns it as bytes.");
static PyObject *Probe_lookup_value (ProbeObject *self, PyObject *args, PyObject *kwargs) {
int ret = 0;
char *kwlist[] = { "name", NULL };
char *name = NULL;
const char *value = NULL;
if (!PyArg_ParseTupleAndKeywords(args, kwargs, "s", kwlist, &name)) {
return NULL;
}
ret = blkid_probe_lookup_value (self->probe, name, &value, NULL);
if (ret != 0) {
PyErr_Format (PyExc_RuntimeError, "Failed to lookup '%s'", name);
return NULL;
}
return PyBytes_FromString (value);
}
PyDoc_STRVAR(Probe_do_safeprobe__doc__,
"do_safeprobe ()\n\n"
"This function gathers probing results from all enabled chains and checks for ambivalent results "
"(e.g. multiple filesystems on the device).\n"
"Returns True on success, False if nothing is detected.\n\n"
"Note about superblocks chain -- the function does not check for filesystems when a RAID signature is detected.\n"
"The function also does not check for collision between RAIDs. The first detected RAID is returned.\n"
"The function checks for collision between partition table and RAID signature -- it's recommended to "
"enable partitions chain together with superblocks chain.\n");
static PyObject *Probe_do_safeprobe (ProbeObject *self, PyObject *Py_UNUSED (ignored)) {
int ret = 0;
if (self->fd < 0) {
PyErr_SetString (PyExc_ValueError, "No device set");
return NULL;
}
if (self->topology) {
Py_DECREF (self->topology);
self->topology = NULL;
}
if (self->partlist) {
Py_DECREF (self->partlist);
self->partlist = NULL;
}
ret = blkid_do_safeprobe (self->probe);
if (ret < 0) {
PyErr_SetString (PyExc_RuntimeError, "Failed to safeprobe the device");
return NULL;
}
if (ret == 0)
Py_RETURN_TRUE;
else
Py_RETURN_FALSE;
}
PyDoc_STRVAR(Probe_do_fullprobe__doc__,
"do_fullprobe ()\n\n"
"Returns True on success, False if nothing is detected.\n"
"This function gathers probing results from all enabled chains. Same as do_safeprobe() but "
"does not check for collision between probing results.");
static PyObject *Probe_do_fullprobe (ProbeObject *self, PyObject *Py_UNUSED (ignored)) {
int ret = 0;
if (self->fd < 0) {
PyErr_SetString (PyExc_ValueError, "No device set");
return NULL;
}
if (self->topology) {
Py_DECREF (self->topology);
self->topology = NULL;
}
if (self->partlist) {
Py_DECREF (self->partlist);
self->partlist = NULL;
}
ret = blkid_do_fullprobe (self->probe);
if (ret < 0) {
PyErr_SetString (PyExc_RuntimeError, "Failed to fullprobe the device");
return NULL;
}
if (ret == 0)
Py_RETURN_TRUE;
else
Py_RETURN_FALSE;
}
PyDoc_STRVAR(Probe_do_probe__doc__,
"do_probe ()\n\n"
"Calls probing functions in all enabled chains. The superblocks chain is enabled by default.\n"
"Returns True on success, False if nothing is detected.\n\n"
"The do_probe() stores the result from only one probing function. It's necessary to call this routine "
"in a loop to get results from all probing functions in all chains. The probing is reset by "
"reset_probe() or by filter functions.");
static PyObject *Probe_do_probe (ProbeObject *self, PyObject *Py_UNUSED (ignored)) {
int ret = 0;
if (self->fd < 0) {
PyErr_SetString (PyExc_ValueError, "No device set");
return NULL;
}
if (self->topology) {
Py_DECREF (self->topology);
self->topology = NULL;
}
if (self->partlist) {
Py_DECREF (self->partlist);
self->partlist = NULL;
}
ret = blkid_do_probe (self->probe);
if (ret < 0) {
PyErr_SetString (PyExc_RuntimeError, "Failed to probe the device");
return NULL;
}
if (ret == 0)
Py_RETURN_TRUE;
else
Py_RETURN_FALSE;
}
PyDoc_STRVAR(Probe_step_back__doc__,
"step_back ()\n\n"
"This function moves the pointer in the probing chain one step back -- it means that the previously "
"used probing function will be called again in the next Probe.do_probe() call.\n"
"This is necessary for example if you erase or modify on-disk superblock according to the "
"current libblkid probing result.\n"
"Note that Probe.hide_range() changes semantic of this function and cached buffers are "
"not reset, but library uses in-memory modified buffers to call the next probing function.");
static PyObject *Probe_step_back (ProbeObject *self, PyObject *Py_UNUSED (ignored)) {
int ret = 0;
ret = blkid_probe_step_back (self->probe);
if (ret < 0) {
PyErr_SetString (PyExc_RuntimeError, "Failed to step back the probe");
return NULL;
}
Py_RETURN_NONE;
}
#ifdef HAVE_BLKID_2_31
PyDoc_STRVAR(Probe_reset_buffers__doc__,
"reset_buffers ()\n\n"
"libblkid reuses all already-read buffers from the device. The buffers may be modified by Probe.hide_range().\n"
"This function resets and frees all cached buffers. The next Probe.do_probe() will read all data from the device.");
static PyObject *Probe_reset_buffers (ProbeObject *self, PyObject *Py_UNUSED (ignored)) {
int ret = 0;
ret = blkid_probe_reset_buffers (self->probe);
if (ret != 0) {
PyErr_SetString (PyExc_RuntimeError, "Failed to reset buffers");
return NULL;
}
Py_RETURN_NONE;
}
#endif
PyDoc_STRVAR(Probe_reset_probe__doc__,
"reset_probe ()\n\n"
"Zeroizes probing results and resets the current probing (this has impact on do_probe() only).\n"
"This function does not touch probing filters and keeps assigned device.");
static PyObject *Probe_reset_probe (ProbeObject *self, PyObject *Py_UNUSED (ignored)) {
blkid_reset_probe (self->probe);
if (self->topology) {
Py_DECREF (self->topology);
self->topology = NULL;
}
if (self->partlist) {
Py_DECREF (self->partlist);
self->partlist = NULL;
}
Py_RETURN_NONE;
}
#ifdef HAVE_BLKID_2_31
PyDoc_STRVAR(Probe_hide_range__doc__,
"hide_range (offset, length)\n\n" \
"This function modifies in-memory cached data from the device. The specified range is zeroized. "
"This is usable together with Probe.step_back(). The next Probe.do_probe() will not see specified area.\n"
"Note that this is usable only for data already read by the library; it is not a way "
"to hide large areas on your device.\n"
"The function Probe.reset_buffers() reverts all.");
static PyObject *Probe_hide_range (ProbeObject *self, PyObject *args, PyObject *kwargs) {
int ret = 0;
char *kwlist[] = { "offset", "length", NULL };
uint64_t offset = 0;
uint64_t length = 0;
if (!PyArg_ParseTupleAndKeywords(args, kwargs, "KK", kwlist, &offset, &length)) {
return NULL;
}
ret = blkid_probe_hide_range (self->probe, offset, length);
if (ret != 0) {
PyErr_SetString (PyExc_RuntimeError, "Failed to hide range");
return NULL;
}
Py_RETURN_NONE;
}
#endif
#ifdef HAVE_BLKID_2_40
PyDoc_STRVAR(Probe_wipe_all__doc__,
"wipe_all ()\n\n"
"This function erases all detectable signatures from the probed device. The probe has to be opened in O_RDWR mode. "
"All other necessary configuration will be enabled automatically.");
static PyObject *Probe_wipe_all (ProbeObject *self, PyObject *Py_UNUSED (ignored)) {
int ret = 0;
ret = blkid_wipe_all (self->probe);
if (ret < 0) {
PyErr_SetString (PyExc_RuntimeError, "Failed to wipe the device");
return NULL;
}
Py_RETURN_NONE;
}
#endif
PyDoc_STRVAR(Probe_do_wipe__doc__,
"do_wipe (dryrun=False)\n\n"
"This function erases the current signature detected by the probe. The probe has to be opened in "
"O_RDWR mode, and the blkid.SUBLKS_MAGIC and/or blkid.PARTS_MAGIC flags have to be enabled (if you also "
"want to erase superblocks with broken checksums, use blkid.SUBLKS_BADCSUM too).\n\n"
"After successfully removing a signature, the probe will be moved one step back and the next "
"do_probe() call will again call previously called probing function. All in-memory cached data "
"from the device are always reset.");
static PyObject *Probe_do_wipe (ProbeObject *self, PyObject *args, PyObject *kwargs) {
int ret = 0;
char *kwlist[] = { "dryrun", NULL };
int dryrun = 0;    /* the "p" format unit stores an int, not a bool */
if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|p", kwlist, &dryrun)) {
return NULL;
}
ret = blkid_do_wipe (self->probe, dryrun);
if (ret != 0) {
PyErr_Format (PyExc_RuntimeError, "Failed to wipe the device: %s", strerror (errno));
return NULL;
}
Py_RETURN_NONE;
}
static PyObject * probe_to_dict (ProbeObject *self) {
PyObject *dict = NULL;
int ret = 0;
int nvalues = 0;
const char *name = NULL;
const char *value = NULL;
PyObject *py_value = NULL;
ret = blkid_probe_numof_values (self->probe);
if (ret < 0) {
PyErr_SetString (PyExc_RuntimeError, "Failed to get probe results");
return NULL;
}
nvalues = ret;
dict = PyDict_New ();
if (!dict) {
PyErr_NoMemory ();
return NULL;
}
for (int i = 0; i < nvalues; i++) {
ret = blkid_probe_get_value (self->probe, i, &name, &value, NULL);
if (ret < 0) {
PyErr_SetString (PyExc_RuntimeError, "Failed to get probe results");
Py_DECREF (dict);
return NULL;
}
py_value = PyUnicode_FromString (value);
if (py_value == NULL) {
Py_INCREF (Py_None);
py_value = Py_None;
}
PyDict_SetItemString (dict, name, py_value);
Py_DECREF (py_value);
}
return dict;
}
PyDoc_STRVAR(Probe_items__doc__,
"items ()\n\n"
"Returns a list of (name, value) pairs of the current probing results.\n");
static PyObject *Probe_items (ProbeObject *self, PyObject *Py_UNUSED (ignored)) {
PyObject *dict = probe_to_dict (self);
if (PyErr_Occurred ())
return NULL;
PyObject *ret = PyDict_Items (dict);
Py_DECREF (dict);
return ret;
}
PyDoc_STRVAR(Probe_values__doc__,
"values ()\n\n"
"Returns a list of values of the current probing results.\n");
static PyObject *Probe_values (ProbeObject *self, PyObject *Py_UNUSED (ignored)) {
PyObject *dict = probe_to_dict (self);
if (PyErr_Occurred ())
return NULL;
PyObject *ret = PyDict_Values (dict);
Py_DECREF (dict);
return ret;
}
PyDoc_STRVAR(Probe_keys__doc__,
"keys ()\n\n"
"Returns a list of names (keys) of the current probing results.\n");
static PyObject *Probe_keys (ProbeObject *self, PyObject *Py_UNUSED (ignored)) {
PyObject *dict = probe_to_dict (self);
if (PyErr_Occurred ())
return NULL;
PyObject *ret = PyDict_Keys (dict);
Py_DECREF (dict);
return ret;
}
static PyMethodDef Probe_methods[] = {
{"set_device", (PyCFunction)(void(*)(void)) Probe_set_device, METH_VARARGS|METH_KEYWORDS, Probe_set_device__doc__},
{"do_safeprobe", (PyCFunction) Probe_do_safeprobe, METH_NOARGS, Probe_do_safeprobe__doc__},
{"do_fullprobe", (PyCFunction) Probe_do_fullprobe, METH_NOARGS, Probe_do_fullprobe__doc__},
{"do_probe", (PyCFunction) Probe_do_probe, METH_NOARGS, Probe_do_probe__doc__},
{"step_back", (PyCFunction) Probe_step_back, METH_NOARGS, Probe_step_back__doc__},
#ifdef HAVE_BLKID_2_31
{"reset_buffers", (PyCFunction) Probe_reset_buffers, METH_NOARGS, Probe_reset_buffers__doc__},
#endif
{"reset_probe", (PyCFunction) Probe_reset_probe, METH_NOARGS, Probe_reset_probe__doc__},
#ifdef HAVE_BLKID_2_31
{"hide_range", (PyCFunction)(void(*)(void)) Probe_hide_range, METH_VARARGS|METH_KEYWORDS, Probe_hide_range__doc__},
#endif
#ifdef HAVE_BLKID_2_40
{"wipe_all", (PyCFunction) Probe_wipe_all, METH_NOARGS, Probe_wipe_all__doc__},
#endif
{"do_wipe", (PyCFunction)(void(*)(void)) Probe_do_wipe, METH_VARARGS|METH_KEYWORDS, Probe_do_wipe__doc__},
{"enable_partitions", (PyCFunction)(void(*)(void)) Probe_enable_partitions, METH_VARARGS|METH_KEYWORDS, Probe_enable_partitions__doc__},
{"set_partitions_flags", (PyCFunction)(void(*)(void)) Probe_set_partitions_flags, METH_VARARGS|METH_KEYWORDS, Probe_set_partitions_flags__doc__},
{"filter_partitions_type", (PyCFunction)(void(*)(void)) Probe_filter_partitions_type, METH_VARARGS|METH_KEYWORDS, Probe_filter_partitions_type__doc__},
{"invert_partitions_filter", (PyCFunction) Probe_invert_partitions_filter, METH_NOARGS, Probe_invert_partitions_filter__doc__},
{"reset_partitions_filter", (PyCFunction) Probe_reset_partitions_filter, METH_NOARGS, Probe_reset_partitions_filter__doc__},
{"enable_topology", (PyCFunction)(void(*)(void)) Probe_enable_topology, METH_VARARGS|METH_KEYWORDS, Probe_enable_topology__doc__},
{"enable_superblocks", (PyCFunction)(void(*)(void)) Probe_enable_superblocks, METH_VARARGS|METH_KEYWORDS, Probe_enable_superblocks__doc__},
{"filter_superblocks_type", (PyCFunction)(void(*)(void)) Probe_filter_superblocks_type, METH_VARARGS|METH_KEYWORDS, Probe_filter_superblocks_type__doc__},
{"filter_superblocks_usage", (PyCFunction)(void(*)(void)) Probe_filter_superblocks_usage, METH_VARARGS|METH_KEYWORDS, Probe_filter_superblocks_usage__doc__},
{"set_superblocks_flags", (PyCFunction)(void(*)(void)) Probe_set_superblocks_flags, METH_VARARGS|METH_KEYWORDS, Probe_set_superblocks_flags__doc__},
{"invert_superblocks_filter", (PyCFunction) Probe_invert_superblocks_filter, METH_NOARGS, Probe_invert_superblocks_filter__doc__},
{"reset_superblocks_filter", (PyCFunction) Probe_reset_superblocks_filter, METH_NOARGS, Probe_reset_superblocks_filter__doc__},
{"lookup_value", (PyCFunction)(void(*)(void)) Probe_lookup_value, METH_VARARGS|METH_KEYWORDS, Probe_lookup_value__doc__},
{"items", (PyCFunction) Probe_items, METH_NOARGS, Probe_items__doc__},
{"values", (PyCFunction) Probe_values, METH_NOARGS, Probe_values__doc__},
{"keys", (PyCFunction) Probe_keys, METH_NOARGS, Probe_keys__doc__},
{NULL, NULL, 0, NULL}
};
static PyObject *Probe_get_devno (ProbeObject *self, PyObject *Py_UNUSED (ignored)) {
dev_t devno = blkid_probe_get_devno (self->probe);
return PyLong_FromUnsignedLong (devno);
}
static PyObject *Probe_get_fd (ProbeObject *self, PyObject *Py_UNUSED (ignored)) {
return PyLong_FromLong (self->fd);
}
static PyObject *Probe_get_offset (ProbeObject *self, PyObject *Py_UNUSED (ignored)) {
blkid_loff_t offset = blkid_probe_get_offset (self->probe);
return PyLong_FromLongLong (offset);
}
static PyObject *Probe_get_sectors (ProbeObject *self, PyObject *Py_UNUSED (ignored)) {
blkid_loff_t sectors = blkid_probe_get_sectors (self->probe);
return PyLong_FromLongLong (sectors);
}
static PyObject *Probe_get_size (ProbeObject *self, PyObject *Py_UNUSED (ignored)) {
blkid_loff_t size = blkid_probe_get_size (self->probe);
return PyLong_FromLongLong (size);
}
static PyObject *Probe_get_sector_size (ProbeObject *self, PyObject *Py_UNUSED (ignored)) {
unsigned int sector_size = blkid_probe_get_sectorsize (self->probe);
return PyLong_FromUnsignedLong (sector_size);
}
#ifdef HAVE_BLKID_2_30
static int Probe_set_sector_size (ProbeObject *self, PyObject *value, void *closure UNUSED) {
unsigned int sector_size = 0;
int ret = 0;
if (!PyLong_Check (value)) {
PyErr_SetString (PyExc_TypeError, "Invalid argument");
return -1;
}
sector_size = PyLong_AsLong (value);
ret = blkid_probe_set_sectorsize (self->probe, sector_size);
if (ret != 0) {
PyErr_Format (PyExc_RuntimeError, "Failed to set sector size");
return -1;
}
return 0;
}
#endif
static PyObject *Probe_get_wholedisk_devno (ProbeObject *self, PyObject *Py_UNUSED (ignored)) {
dev_t devno = blkid_probe_get_wholedisk_devno (self->probe);
return PyLong_FromUnsignedLong (devno);
}
static PyObject *Probe_get_is_wholedisk (ProbeObject *self, PyObject *Py_UNUSED (ignored)) {
int wholedisk = blkid_probe_is_wholedisk (self->probe);
return PyBool_FromLong (wholedisk);
}
static PyObject *Probe_get_topology (ProbeObject *self, PyObject *Py_UNUSED (ignored)) {
if (self->topology) {
Py_INCREF (self->topology);
return self->topology;
}
self->topology = _Topology_get_topology_object (self->probe);
/* keep one reference for the cache and return a second one to the caller */
Py_XINCREF (self->topology);
return self->topology;
}
static PyObject *Probe_get_partitions (ProbeObject *self, PyObject *Py_UNUSED (ignored)) {
if (self->partlist) {
Py_INCREF (self->partlist);
return self->partlist;
}
self->partlist = _Partlist_get_partlist_object (self->probe);
/* keep one reference for the cache and return a second one to the caller */
Py_XINCREF (self->partlist);
return self->partlist;
}
static PyGetSetDef Probe_getseters[] = {
{"devno", (getter) Probe_get_devno, NULL, "block device number, or 0 for regular files", NULL},
{"fd", (getter) Probe_get_fd, NULL, "file descriptor for assigned device/file or -1 in case of error", NULL},
{"offset", (getter) Probe_get_offset, NULL, "offset of probing area as defined by Probe.set_device() or -1 in case of error", NULL},
{"sectors", (getter) Probe_get_sectors, NULL, "512-byte sector count or -1 in case of error", NULL},
{"size", (getter) Probe_get_size, NULL, "size of probing area as defined by Probe.set_device()", NULL},
#ifdef HAVE_BLKID_2_30
{"sector_size", (getter) Probe_get_sector_size, (setter) Probe_set_sector_size, "block device logical sector size (BLKSSZGET ioctl, default 512).", NULL},
#else
{"sector_size", (getter) Probe_get_sector_size, NULL, "block device logical sector size (BLKSSZGET ioctl, default 512).", NULL},
#endif
{"wholedisk_devno", (getter) Probe_get_wholedisk_devno, NULL, "device number of the wholedisk, or 0 for regular files", NULL},
{"is_wholedisk", (getter) Probe_get_is_wholedisk, NULL, "True if the device is whole-disk, False otherwise", NULL},
{"topology", (getter) Probe_get_topology, NULL, "binary interface for topology values", NULL},
{"partitions", (getter) Probe_get_partitions, NULL, "binary interface for partitions", NULL},
{NULL, NULL, NULL, NULL, NULL}
};
static Py_ssize_t Probe_len (ProbeObject *self) {
int ret = 0;
ret = blkid_probe_numof_values (self->probe);
if (ret < 0)
return 0;
return (Py_ssize_t) ret;
}
static PyObject * Probe_getitem (ProbeObject *self, PyObject *item) {
int ret = 0;
const char *key = NULL;
const char *value = NULL;
if (!PyUnicode_Check (item)) {
PyErr_SetObject(PyExc_KeyError, item);
return NULL;
}
key = PyUnicode_AsUTF8 (item);
ret = blkid_probe_lookup_value (self->probe, key, &value, NULL);
if (ret != 0) {
PyErr_SetObject (PyExc_KeyError, item);
return NULL;
}
return PyBytes_FromString (value);
}
PyMappingMethods ProbeMapping = {
.mp_length = (lenfunc) Probe_len,
.mp_subscript = (binaryfunc) Probe_getitem,
};
PyTypeObject ProbeType = {
PyVarObject_HEAD_INIT (NULL, 0)
.tp_name = "blkid.Probe",
.tp_basicsize = sizeof (ProbeObject),
.tp_itemsize = 0,
.tp_flags = Py_TPFLAGS_DEFAULT,
.tp_new = Probe_new,
.tp_dealloc = (destructor) Probe_dealloc,
.tp_init = (initproc) Probe_init,
.tp_methods = Probe_methods,
.tp_getset = Probe_getseters,
.tp_as_mapping = &ProbeMapping,
};


@@ -0,0 +1,39 @@
/*
* Copyright (C) 2020 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#ifndef PROBE_H
#define PROBE_H
#include <Python.h>
#include <blkid/blkid.h>
typedef struct {
PyObject_HEAD
blkid_probe probe;
PyObject *topology;
PyObject *partlist;
int fd;
} ProbeObject;
extern PyTypeObject ProbeType;
PyObject *Probe_new (PyTypeObject *type, PyObject *args, PyObject *kwargs);
int Probe_init (ProbeObject *self, PyObject *args, PyObject *kwargs);
void Probe_dealloc (ProbeObject *self);
#endif /* PROBE_H */
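The module file that follows wraps several libblkid string helpers. As an illustration of what `Blkid_parse_tag_string` (defined further down) returns, here is an approximate pure-Python model of splitting a `NAME=value` tag — a sketch of the expected semantics only, not the exact C implementation (libblkid's quote handling may differ in edge cases):

```python
def parse_tag_string(tag: str):
    """Approximate model of blkid.parse_tag_string: split 'NAME=value'
    into a (type, value) tuple, stripping one pair of surrounding quotes."""
    name, sep, value = tag.partition("=")
    if not sep or not name:
        # the binding raises RuntimeError("Failed to parse tag '...'") here
        raise ValueError(f"Failed to parse tag '{tag}'")
    if len(value) >= 2 and value[0] == value[-1] and value[0] in "\"'":
        value = value[1:-1]
    return name, value

print(parse_tag_string("LABEL=root"))      # → ('LABEL', 'root')
print(parse_tag_string('UUID="1234-AB"'))  # → ('UUID', '1234-AB')
```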


@@ -0,0 +1,631 @@
/*
* Copyright (C) 2020 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include "pyblkid.h"
#include "probe.h"
#include "topology.h"
#include "partitions.h"
#include "cache.h"
#include <blkid/blkid.h>
#include <errno.h>
#include <fcntl.h>
#define UNUSED __attribute__((unused))
PyDoc_STRVAR(Blkid_init_debug__doc__,
"init_debug (mask)\n\n"
"If the mask is not specified then this function reads LIBBLKID_DEBUG environment variable to get the mask.\n"
"Already initialized debugging cannot be changed; calling this function twice has no effect.\n\n"
"Use '0xffff' to enable full debugging.\n");
static PyObject *Blkid_init_debug (PyObject *self UNUSED, PyObject *args, PyObject *kwargs) {
int mask = 0;
char *kwlist[] = { "mask", NULL };
if (!PyArg_ParseTupleAndKeywords (args, kwargs, "|i", kwlist, &mask))
return NULL;
blkid_init_debug (mask);
Py_RETURN_NONE;
}
PyDoc_STRVAR(Blkid_known_fstype__doc__,
"known_fstype (fstype)\n\n"
"Returns whether fstype is a known filesystem type or not.\n");
static PyObject *Blkid_known_fstype (PyObject *self UNUSED, PyObject *args, PyObject *kwargs) {
const char *fstype = NULL;
char *kwlist[] = { "fstype", NULL };
if (!PyArg_ParseTupleAndKeywords (args, kwargs, "s", kwlist, &fstype))
return NULL;
return PyBool_FromLong (blkid_known_fstype (fstype));
}
PyDoc_STRVAR(Blkid_send_uevent__doc__,
"send_uevent (devname, action)\n\n");
static PyObject *Blkid_send_uevent (PyObject *self UNUSED, PyObject *args, PyObject *kwargs) {
const char *devname = NULL;
const char *action = NULL;
char *kwlist[] = { "devname", "action", NULL };
int ret = 0;
if (!PyArg_ParseTupleAndKeywords (args, kwargs, "ss", kwlist, &devname, &action))
return NULL;
ret = blkid_send_uevent (devname, action);
if (ret < 0) {
PyErr_Format (PyExc_RuntimeError, "Failed to send %s uevent to device '%s'", action, devname);
return NULL;
}
Py_RETURN_NONE;
}
PyDoc_STRVAR(Blkid_known_pttype__doc__,
"known_pttype (pttype)\n\n"
"Returns whether pttype is a known partition type or not.\n");
static PyObject *Blkid_known_pttype (PyObject *self UNUSED, PyObject *args, PyObject *kwargs) {
const char *pttype = NULL;
char *kwlist[] = { "pttype", NULL };
if (!PyArg_ParseTupleAndKeywords (args, kwargs, "s", kwlist, &pttype))
return NULL;
return PyBool_FromLong (blkid_known_pttype (pttype));
}
static int _Py_Dev_Converter (PyObject *obj, void *p) {
#ifdef HAVE_LONG_LONG
*((dev_t *)p) = PyLong_AsUnsignedLongLong (obj);
#else
*((dev_t *)p) = PyLong_AsUnsignedLong (obj);
#endif
if (PyErr_Occurred ())
return 0;
return 1;
}
#ifdef HAVE_LONG_LONG
#define _PyLong_FromDev PyLong_FromLongLong
#else
#define _PyLong_FromDev PyLong_FromLong
#endif
PyDoc_STRVAR(Blkid_devno_to_devname__doc__,
"devno_to_devname (devno)\n\n"
"This function finds the pathname to a block device with a given device number.\n");
static PyObject *Blkid_devno_to_devname (PyObject *self UNUSED, PyObject *args, PyObject *kwargs) {
dev_t devno = 0;
char *kwlist[] = { "devno", NULL };
char *devname = NULL;
PyObject *ret = NULL;
if (!PyArg_ParseTupleAndKeywords (args, kwargs, "O&:devno_to_devname", kwlist, _Py_Dev_Converter, &devno))
return NULL;
devname = blkid_devno_to_devname (devno);
if (!devname) {
PyErr_SetString (PyExc_RuntimeError, "Failed to get devname");
return NULL;
}
ret = PyUnicode_FromString (devname);
free (devname);
return ret;
}
PyDoc_STRVAR(Blkid_devno_to_wholedisk__doc__,
"devno_to_wholedisk (devno)\n\n"
"This function uses sysfs to convert the devno device number to the name and devno of the whole disk.");
static PyObject *Blkid_devno_to_wholedisk (PyObject *self UNUSED, PyObject *args, PyObject *kwargs) {
dev_t devno = 0;
dev_t diskdevno = 0;
char *kwlist[] = { "devno", NULL };
#ifdef HAVE_BLKID_2_28
char diskname[32];
#else
char diskname[PATH_MAX];
#endif
int ret = 0;
PyObject *tuple = NULL;
PyObject *py_name = NULL;
PyObject *py_devno = NULL;
if (!PyArg_ParseTupleAndKeywords (args, kwargs, "O&:devno_to_wholedisk", kwlist, _Py_Dev_Converter, &devno))
return NULL;
#ifdef HAVE_BLKID_2_28
ret = blkid_devno_to_wholedisk (devno, diskname, 32, &diskdevno);
#else
ret = blkid_devno_to_wholedisk (devno, diskname, PATH_MAX, &diskdevno);
#endif
if (ret != 0) {
PyErr_SetString (PyExc_RuntimeError, "Failed to get whole disk name");
return NULL;
}
tuple = PyTuple_New (2);
py_name = PyUnicode_FromString (diskname);
if (py_name == NULL) {
Py_INCREF (Py_None);
py_name = Py_None;
}
PyTuple_SetItem (tuple, 0, py_name);
py_devno = _PyLong_FromDev (diskdevno);
if (py_devno == NULL) {
Py_INCREF (Py_None);
py_devno = Py_None;
}
PyTuple_SetItem (tuple, 1, py_devno);
return tuple;
}
PyDoc_STRVAR(Blkid_parse_version_string__doc__,
"parse_version_string (version)\n\n"
"Convert version string (e.g. '2.16.0') to release version code (e.g. '2160').\n");
static PyObject *Blkid_parse_version_string (PyObject *self UNUSED, PyObject *args, PyObject *kwargs) {
char *ver_str = NULL;
char *kwlist[] = { "version", NULL };
int ret = 0;
if (!PyArg_ParseTupleAndKeywords (args, kwargs, "s", kwlist, &ver_str))
return NULL;
ret = blkid_parse_version_string (ver_str);
return PyLong_FromLong (ret);
}
PyDoc_STRVAR(Blkid_get_library_version__doc__,
"get_library_version ()\n\n"
"Returns tuple of release version code (int), version string and date.\n");
static PyObject *Blkid_get_library_version (ProbeObject *self UNUSED, PyObject *Py_UNUSED (ignored)) {
const char *ver_str = NULL;
const char *date = NULL;
int ver_code = 0;
PyObject *ret = NULL;
PyObject *py_code = NULL;
PyObject *py_ver = NULL;
PyObject *py_date = NULL;
ver_code = blkid_get_library_version (&ver_str, &date);
ret = PyTuple_New (3);
py_code = PyLong_FromLong (ver_code);
PyTuple_SetItem (ret, 0, py_code);
py_ver = PyUnicode_FromString (ver_str);
if (py_ver == NULL) {
Py_INCREF (Py_None);
py_ver = Py_None;
}
PyTuple_SetItem (ret, 1, py_ver);
py_date = PyUnicode_FromString (date);
if (py_date == NULL) {
Py_INCREF (Py_None);
py_date = Py_None;
}
PyTuple_SetItem (ret, 2, py_date);
return ret;
}
PyDoc_STRVAR(Blkid_parse_tag_string__doc__,
"parse_tag_string (tag)\n\n"
"Parse a 'NAME=value' string, returns tuple of type and value.\n");
static PyObject *Blkid_parse_tag_string (PyObject *self UNUSED, PyObject *args, PyObject *kwargs) {
char *tag_str = NULL;
char *kwlist[] = { "tag", NULL };
int ret = 0;
char *type = NULL;
char *value = NULL;
PyObject *py_type = NULL;
PyObject *py_value = NULL;
PyObject *tuple = NULL;
if (!PyArg_ParseTupleAndKeywords (args, kwargs, "s", kwlist, &tag_str))
return NULL;
ret = blkid_parse_tag_string (tag_str, &type, &value);
if (ret < 0) {
PyErr_Format (PyExc_RuntimeError, "Failed to parse tag '%s'", tag_str);
return NULL;
}
tuple = PyTuple_New (2);
py_type = PyUnicode_FromString (type);
if (py_type == NULL) {
Py_INCREF (Py_None);
py_type = Py_None;
}
PyTuple_SetItem (tuple, 0, py_type);
free (type);
py_value = PyUnicode_FromString (value);
if (py_value == NULL) {
Py_INCREF (Py_None);
py_value = Py_None;
}
PyTuple_SetItem (tuple, 1, py_value);
free (value);
return tuple;
}
PyDoc_STRVAR(Blkid_get_dev_size__doc__,
"get_dev_size (device)\n\n"
"Returns size (in bytes) of the block device or size of the regular file.\n");
static PyObject *Blkid_get_dev_size (PyObject *self UNUSED, PyObject *args, PyObject *kwargs) {
char *device = NULL;
char *kwlist[] = { "device", NULL };
blkid_loff_t ret = 0;
int fd = 0;
if (!PyArg_ParseTupleAndKeywords (args, kwargs, "s", kwlist, &device))
return NULL;
fd = open (device, O_RDONLY|O_CLOEXEC);
if (fd == -1) {
PyErr_Format (PyExc_OSError, "Failed to open device '%s': %s", device, strerror (errno));
return NULL;
}
ret = blkid_get_dev_size (fd);
if (ret == 0) {
PyErr_Format (PyExc_RuntimeError, "Failed to get size of device '%s'", device);
close (fd);
return NULL;
}
close (fd);
return PyLong_FromLongLong (ret);
}
PyDoc_STRVAR(Blkid_encode_string__doc__,
"encode_string (string)\n\n"
"Encode all potentially unsafe characters of a string to the corresponding hex value prefixed by '\\x'.\n");
static PyObject *Blkid_encode_string (PyObject *self UNUSED, PyObject *args, PyObject *kwargs) {
char *string = NULL;
char *kwlist[] = { "string", NULL };
char *encoded_string = NULL;
int ret = 0;
size_t inlen = 0;
size_t outlen = 0;
PyObject *py_ret = NULL;
if (!PyArg_ParseTupleAndKeywords (args, kwargs, "s", kwlist, &string))
return NULL;
inlen = strlen (string);
outlen = inlen * 4;
encoded_string = malloc (sizeof (char) * (outlen + 1));
if (!encoded_string)
return PyErr_NoMemory ();
ret = blkid_encode_string (string, encoded_string, outlen);
if (ret != 0) {
PyErr_Format (PyExc_RuntimeError, "Failed to encode string");
free (encoded_string);
return NULL;
}
py_ret = PyUnicode_FromString (encoded_string);
free (encoded_string);
return py_ret;
}
PyDoc_STRVAR(Blkid_safe_string__doc__,
"safe_string (string)\n\n"
"Allows plain ascii, hex-escaping and valid utf8. Replaces all whitespaces with '_'.\n");
static PyObject *Blkid_safe_string (PyObject *self UNUSED, PyObject *args, PyObject *kwargs) {
char *string = NULL;
char *kwlist[] = { "string", NULL };
char *safe_string = NULL;
int ret = 0;
size_t inlen = 0;
size_t outlen = 0;
PyObject *py_ret = NULL;
if (!PyArg_ParseTupleAndKeywords (args, kwargs, "s", kwlist, &string))
return NULL;
inlen = strlen (string);
outlen = inlen * 4;
safe_string = malloc (sizeof (char) * (outlen + 1));
if (!safe_string)
return PyErr_NoMemory ();
ret = blkid_safe_string (string, safe_string, outlen);
if (ret != 0) {
PyErr_Format (PyExc_RuntimeError, "Failed to make safe string");
free (safe_string);
return NULL;
}
py_ret = PyUnicode_FromString (safe_string);
free (safe_string);
return py_ret;
}
#ifdef HAVE_BLKID_2_30
PyDoc_STRVAR(Blkid_partition_types__doc__,
"partition_types ()\n\n"
"List of supported partition types.\n");
static PyObject *Blkid_partition_types (ProbeObject *self UNUSED, PyObject *Py_UNUSED (ignored)) {
PyObject *ret = NULL;
PyObject *py_name = NULL;
size_t idx = 0;
const char *name = NULL;
ret = PyList_New (0);
while (blkid_partitions_get_name (idx++, &name) == 0) {
py_name = PyUnicode_FromString (name);
if (py_name != NULL)
PyList_Append (ret, py_name);
}
return ret;
}
#endif
PyDoc_STRVAR(Blkid_superblocks__doc__,
"superblocks ()\n\n"
"List of supported superblocks.\n");
static PyObject *Blkid_superblocks (ProbeObject *self UNUSED, PyObject *Py_UNUSED (ignored)) {
PyObject *ret = NULL;
PyObject *py_name = NULL;
size_t idx = 0;
const char *name = NULL;
ret = PyList_New (0);
while (blkid_superblocks_get_name (idx++, &name, NULL) == 0) {
py_name = PyUnicode_FromString (name);
if (py_name != NULL)
PyList_Append (ret, py_name);
}
return ret;
}
PyDoc_STRVAR(Blkid_evaluate_tag__doc__,
"evaluate_tag (token, value)\n\n"
"Get the device name that matches the specified token (e.g. \"LABEL\" or \"UUID\") and token value.\n"
"The evaluation could be controlled by the /etc/blkid.conf config file. The default is to try \"udev\" and then \"scan\" method.\n");
static PyObject *Blkid_evaluate_tag (PyObject *self UNUSED, PyObject *args, PyObject *kwargs) {
char *token = NULL;
char *value = NULL;
char *kwlist[] = { "token", "value", NULL };
PyObject *py_ret = NULL;
char *ret = NULL;
if (!PyArg_ParseTupleAndKeywords (args, kwargs, "ss", kwlist, &token, &value))
return NULL;
ret = blkid_evaluate_tag (token, value, NULL);
if (ret == NULL) {
Py_INCREF (Py_None);
py_ret = Py_None;
} else {
py_ret = PyUnicode_FromString (ret);
free (ret);
}
return py_ret;
}
PyDoc_STRVAR(Blkid_evaluate_spec__doc__,
"evaluate_spec (spec)\n\n"
"Get the device name that matches the unparsed tag (e.g. \"LABEL=foo\") or a path (e.g. /dev/dm-0).\n"
"The evaluation could be controlled by the /etc/blkid.conf config file. The default is to try \"udev\" and then \"scan\" method.\n");
static PyObject *Blkid_evaluate_spec (PyObject *self UNUSED, PyObject *args, PyObject *kwargs) {
char *spec = NULL;
char *kwlist[] = { "spec", NULL };
PyObject *py_ret = NULL;
char *ret = NULL;
if (!PyArg_ParseTupleAndKeywords (args, kwargs, "s", kwlist, &spec))
return NULL;
ret = blkid_evaluate_spec (spec, NULL);
if (ret == NULL) {
Py_INCREF (Py_None);
py_ret = Py_None;
} else {
py_ret = PyUnicode_FromString (ret);
free (ret);
}
return py_ret;
}
static PyMethodDef BlkidMethods[] = {
{"init_debug", (PyCFunction)(void(*)(void)) Blkid_init_debug, METH_VARARGS|METH_KEYWORDS, Blkid_init_debug__doc__},
{"known_fstype", (PyCFunction)(void(*)(void)) Blkid_known_fstype, METH_VARARGS|METH_KEYWORDS, Blkid_known_fstype__doc__},
{"send_uevent", (PyCFunction)(void(*)(void)) Blkid_send_uevent, METH_VARARGS|METH_KEYWORDS, Blkid_send_uevent__doc__},
{"devno_to_devname", (PyCFunction)(void(*)(void)) Blkid_devno_to_devname, METH_VARARGS|METH_KEYWORDS, Blkid_devno_to_devname__doc__},
{"devno_to_wholedisk", (PyCFunction)(void(*)(void)) Blkid_devno_to_wholedisk, METH_VARARGS|METH_KEYWORDS, Blkid_devno_to_wholedisk__doc__},
{"known_pttype", (PyCFunction)(void(*)(void)) Blkid_known_pttype, METH_VARARGS|METH_KEYWORDS, Blkid_known_pttype__doc__},
{"parse_version_string", (PyCFunction)(void(*)(void)) Blkid_parse_version_string, METH_VARARGS|METH_KEYWORDS, Blkid_parse_version_string__doc__},
{"get_library_version", (PyCFunction) Blkid_get_library_version, METH_NOARGS, Blkid_get_library_version__doc__},
{"parse_tag_string", (PyCFunction)(void(*)(void)) Blkid_parse_tag_string, METH_VARARGS|METH_KEYWORDS, Blkid_parse_tag_string__doc__},
{"get_dev_size", (PyCFunction)(void(*)(void)) Blkid_get_dev_size, METH_VARARGS|METH_KEYWORDS, Blkid_get_dev_size__doc__},
{"encode_string", (PyCFunction)(void(*)(void)) Blkid_encode_string, METH_VARARGS|METH_KEYWORDS, Blkid_encode_string__doc__},
{"safe_string", (PyCFunction)(void(*)(void)) Blkid_safe_string, METH_VARARGS|METH_KEYWORDS, Blkid_safe_string__doc__},
#ifdef HAVE_BLKID_2_30
{"partition_types", (PyCFunction) Blkid_partition_types, METH_NOARGS, Blkid_partition_types__doc__},
#endif
{"superblocks", (PyCFunction) Blkid_superblocks, METH_NOARGS, Blkid_superblocks__doc__},
{"evaluate_tag", (PyCFunction)(void(*)(void)) Blkid_evaluate_tag, METH_VARARGS|METH_KEYWORDS, Blkid_evaluate_tag__doc__},
{"evaluate_spec", (PyCFunction)(void(*)(void)) Blkid_evaluate_spec, METH_VARARGS|METH_KEYWORDS, Blkid_evaluate_spec__doc__},
{NULL, NULL, 0, NULL}
};
static struct PyModuleDef blkidmodule = {
PyModuleDef_HEAD_INIT,
.m_name = "blkid",
.m_doc = "Python interface for the libblkid C library",
.m_size = -1,
.m_methods = BlkidMethods,
};
PyMODINIT_FUNC PyInit_blkid (void) {
PyObject *module = NULL;
if (PyType_Ready (&ProbeType) < 0)
return NULL;
if (PyType_Ready (&TopologyType) < 0)
return NULL;
if (PyType_Ready (&PartlistType) < 0)
return NULL;
if (PyType_Ready (&ParttableType) < 0)
return NULL;
if (PyType_Ready (&PartitionType) < 0)
return NULL;
if (PyType_Ready (&CacheType) < 0)
return NULL;
if (PyType_Ready (&DeviceType) < 0)
return NULL;
module = PyModule_Create (&blkidmodule);
if (!module)
return NULL;
PyModule_AddIntConstant (module, "FLTR_NOTIN", BLKID_FLTR_NOTIN);
PyModule_AddIntConstant (module, "FLTR_ONLYIN", BLKID_FLTR_ONLYIN);
PyModule_AddIntConstant (module, "DEV_CREATE", BLKID_DEV_CREATE);
PyModule_AddIntConstant (module, "DEV_FIND", BLKID_DEV_FIND);
PyModule_AddIntConstant (module, "DEV_NORMAL", BLKID_DEV_NORMAL);
PyModule_AddIntConstant (module, "DEV_VERIFY", BLKID_DEV_VERIFY);
PyModule_AddIntConstant (module, "PARTS_ENTRY_DETAILS", BLKID_PARTS_ENTRY_DETAILS);
PyModule_AddIntConstant (module, "PARTS_FORCE_GPT", BLKID_PARTS_FORCE_GPT);
PyModule_AddIntConstant (module, "PARTS_MAGIC", BLKID_PARTS_MAGIC);
#ifdef HAVE_BLKID_2_24
PyModule_AddIntConstant (module, "SUBLKS_BADCSUM", BLKID_SUBLKS_BADCSUM);
#endif
PyModule_AddIntConstant (module, "SUBLKS_DEFAULT", BLKID_SUBLKS_DEFAULT);
#ifdef HAVE_BLKID_2_39
PyModule_AddIntConstant (module, "SUBLKS_FSINFO", BLKID_SUBLKS_FSINFO);
#endif
PyModule_AddIntConstant (module, "SUBLKS_LABEL", BLKID_SUBLKS_LABEL);
PyModule_AddIntConstant (module, "SUBLKS_LABELRAW", BLKID_SUBLKS_LABELRAW);
PyModule_AddIntConstant (module, "SUBLKS_MAGIC", BLKID_SUBLKS_MAGIC);
PyModule_AddIntConstant (module, "SUBLKS_SECTYPE", BLKID_SUBLKS_SECTYPE);
PyModule_AddIntConstant (module, "SUBLKS_TYPE", BLKID_SUBLKS_TYPE);
PyModule_AddIntConstant (module, "SUBLKS_USAGE", BLKID_SUBLKS_USAGE);
PyModule_AddIntConstant (module, "SUBLKS_UUID", BLKID_SUBLKS_UUID);
PyModule_AddIntConstant (module, "SUBLKS_UUIDRAW", BLKID_SUBLKS_UUIDRAW);
PyModule_AddIntConstant (module, "SUBLKS_VERSION", BLKID_SUBLKS_VERSION);
PyModule_AddIntConstant (module, "USAGE_CRYPTO", BLKID_USAGE_CRYPTO);
PyModule_AddIntConstant (module, "USAGE_FILESYSTEM", BLKID_USAGE_FILESYSTEM);
PyModule_AddIntConstant (module, "USAGE_OTHER", BLKID_USAGE_OTHER);
PyModule_AddIntConstant (module, "USAGE_RAID", BLKID_USAGE_RAID);
Py_INCREF (&ProbeType);
if (PyModule_AddObject (module, "Probe", (PyObject *) &ProbeType) < 0) {
Py_DECREF (&ProbeType);
Py_DECREF (module);
return NULL;
}
Py_INCREF (&TopologyType);
if (PyModule_AddObject (module, "Topology", (PyObject *) &TopologyType) < 0) {
Py_DECREF (&ProbeType);
Py_DECREF (&TopologyType);
Py_DECREF (module);
return NULL;
}
Py_INCREF (&PartlistType);
if (PyModule_AddObject (module, "Partlist", (PyObject *) &PartlistType) < 0) {
Py_DECREF (&ProbeType);
Py_DECREF (&TopologyType);
Py_DECREF (&PartlistType);
Py_DECREF (module);
return NULL;
}
Py_INCREF (&ParttableType);
if (PyModule_AddObject (module, "Parttable", (PyObject *) &ParttableType) < 0) {
Py_DECREF (&ProbeType);
Py_DECREF (&TopologyType);
Py_DECREF (&PartlistType);
Py_DECREF (&ParttableType);
Py_DECREF (module);
return NULL;
}
Py_INCREF (&PartitionType);
if (PyModule_AddObject (module, "Partition", (PyObject *) &PartitionType) < 0) {
Py_DECREF (&ProbeType);
Py_DECREF (&TopologyType);
Py_DECREF (&PartlistType);
Py_DECREF (&ParttableType);
Py_DECREF (&PartitionType);
Py_DECREF (module);
return NULL;
}
Py_INCREF (&CacheType);
if (PyModule_AddObject (module, "Cache", (PyObject *) &CacheType) < 0) {
Py_DECREF (&ProbeType);
Py_DECREF (&TopologyType);
Py_DECREF (&PartlistType);
Py_DECREF (&ParttableType);
Py_DECREF (&PartitionType);
Py_DECREF (&CacheType);
Py_DECREF (module);
return NULL;
}
Py_INCREF (&DeviceType);
if (PyModule_AddObject (module, "Device", (PyObject *) &DeviceType) < 0) {
Py_DECREF (&ProbeType);
Py_DECREF (&TopologyType);
Py_DECREF (&PartlistType);
Py_DECREF (&ParttableType);
Py_DECREF (&PartitionType);
Py_DECREF (&CacheType);
Py_DECREF (&DeviceType);
Py_DECREF (module);
return NULL;
}
return module;
}

@@ -0,0 +1,25 @@
/*
* Copyright (C) 2020 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#ifndef PYBLKID_H
#define PYBLKID_H
#include <Python.h>
extern PyMODINIT_FUNC PyInit_blkid (void);
#endif /* PYBLKID_H */

@@ -0,0 +1,141 @@
/*
* Copyright (C) 2020 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#include "topology.h"
#include <blkid/blkid.h>
#define UNUSED __attribute__((unused))
PyObject *Topology_new (PyTypeObject *type, PyObject *args UNUSED, PyObject *kwargs UNUSED) {
TopologyObject *self = (TopologyObject*) type->tp_alloc (type, 0);
return (PyObject *) self;
}
int Topology_init (TopologyObject *self UNUSED, PyObject *args UNUSED, PyObject *kwargs UNUSED) {
return 0;
}
void Topology_dealloc (TopologyObject *self) {
Py_TYPE (self)->tp_free ((PyObject *) self);
}
PyObject *_Topology_get_topology_object (blkid_probe probe) {
TopologyObject *result = NULL;
blkid_topology topology = NULL;
if (!probe) {
PyErr_SetString (PyExc_RuntimeError, "internal error");
return NULL;
}
topology = blkid_probe_get_topology (probe);
if (!topology) {
PyErr_SetString (PyExc_RuntimeError, "Failed to get topology");
return NULL;
}
result = PyObject_New (TopologyObject, &TopologyType);
if (!result) {
PyErr_SetString (PyExc_MemoryError, "Failed to create a new Topology object");
return NULL;
}
Py_INCREF (result);
result->topology = topology;
return (PyObject *) result;
}
static PyObject *Topology_get_alignment_offset (TopologyObject *self, PyObject *Py_UNUSED (ignored)) {
unsigned long alignment_offset = blkid_topology_get_alignment_offset (self->topology);
return PyLong_FromUnsignedLong (alignment_offset);
}
static PyObject *Topology_get_logical_sector_size (TopologyObject *self, PyObject *Py_UNUSED (ignored)) {
unsigned long logical_sector_size = blkid_topology_get_logical_sector_size (self->topology);
return PyLong_FromUnsignedLong (logical_sector_size);
}
static PyObject *Topology_get_minimum_io_size (TopologyObject *self, PyObject *Py_UNUSED (ignored)) {
unsigned long minimum_io_size = blkid_topology_get_minimum_io_size (self->topology);
return PyLong_FromUnsignedLong (minimum_io_size);
}
static PyObject *Topology_get_optimal_io_size (TopologyObject *self, PyObject *Py_UNUSED (ignored)) {
unsigned long optimal_io_size = blkid_topology_get_optimal_io_size (self->topology);
return PyLong_FromUnsignedLong (optimal_io_size);
}
static PyObject *Topology_get_physical_sector_size (TopologyObject *self, PyObject *Py_UNUSED (ignored)) {
unsigned long physical_sector_size = blkid_topology_get_physical_sector_size (self->topology);
return PyLong_FromUnsignedLong (physical_sector_size);
}
#ifdef HAVE_BLKID_2_36
static PyObject *Topology_get_dax (TopologyObject *self, PyObject *Py_UNUSED (ignored)) {
int dax = blkid_topology_get_dax (self->topology);
if (dax == 1)
Py_RETURN_TRUE;
else
Py_RETURN_FALSE;
}
#endif
#ifdef HAVE_BLKID_2_39
static PyObject *Topology_get_diskseq (TopologyObject *self, PyObject *Py_UNUSED (ignored)) {
uint64_t diskseq = blkid_topology_get_diskseq (self->topology);
return PyLong_FromUnsignedLongLong (diskseq);
}
#endif
static PyGetSetDef Topology_getseters[] = {
{"alignment_offset", (getter) Topology_get_alignment_offset, NULL, "alignment offset in bytes or 0", NULL},
{"logical_sector_size", (getter) Topology_get_logical_sector_size, NULL, "logical sector size (BLKSSZGET ioctl) in bytes or 0", NULL},
{"minimum_io_size", (getter) Topology_get_minimum_io_size, NULL, "minimum io size in bytes or 0", NULL},
{"optimal_io_size", (getter) Topology_get_optimal_io_size, NULL, "optimal io size in bytes or 0", NULL},
{"physical_sector_size", (getter) Topology_get_physical_sector_size, NULL, "logical sector size (BLKSSZGET ioctl) in bytes or 0", NULL},
#ifdef HAVE_BLKID_2_36
{"dax", (getter) Topology_get_dax, NULL, "whether DAX is supported or not", NULL},
#endif
#ifdef HAVE_BLKID_2_39
{"diskseq", (getter) Topology_get_diskseq, NULL, "disk sequence number", NULL},
#endif
{NULL, NULL, NULL, NULL, NULL}
};
PyTypeObject TopologyType = {
PyVarObject_HEAD_INIT (NULL, 0)
.tp_name = "blkid.Topology",
.tp_basicsize = sizeof (TopologyObject),
.tp_itemsize = 0,
.tp_flags = Py_TPFLAGS_DEFAULT,
.tp_new = Topology_new,
.tp_dealloc = (destructor) Topology_dealloc,
.tp_init = (initproc) Topology_init,
.tp_getset = Topology_getseters,
};

@@ -0,0 +1,38 @@
/*
* Copyright (C) 2020 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*
*/
#ifndef TOPOLOGY_H
#define TOPOLOGY_H
#include <Python.h>
#include <blkid/blkid.h>
typedef struct {
PyObject_HEAD
blkid_topology topology;
} TopologyObject;
extern PyTypeObject TopologyType;
PyObject *Topology_new (PyTypeObject *type, PyObject *args, PyObject *kwargs);
int Topology_init (TopologyObject *self, PyObject *args, PyObject *kwargs);
void Topology_dealloc (TopologyObject *self);
PyObject *_Topology_get_topology_object (blkid_probe probe);
#endif /* TOPOLOGY_H */

Binary file not shown.

Binary file not shown.

@@ -0,0 +1,98 @@
import os
import unittest
from . import utils
import blkid
class BlkidTestCase(unittest.TestCase):
test_image = "test.img.xz"
loop_dev = None
@classmethod
def setUpClass(cls):
test_dir = os.path.abspath(os.path.dirname(__file__))
cls.loop_dev = utils.loop_setup(os.path.join(test_dir, cls.test_image))
@classmethod
def tearDownClass(cls):
if cls.loop_dev:
utils.loop_teardown(cls.loop_dev)
def test_blkid(self):
self.assertTrue(blkid.known_fstype("ext4"))
self.assertFalse(blkid.known_fstype("not-a-filesystem"))
self.assertTrue(blkid.known_pttype("dos"))
self.assertFalse(blkid.known_pttype("not-a-partition-table"))
self.assertEqual(blkid.parse_version_string("2.16.0"), 2160)
code, version, date = blkid.get_library_version()
self.assertGreater(code, 0)
self.assertIsNotNone(version)
self.assertIsNotNone(date)
ttype, tvalue = blkid.parse_tag_string("NAME=test")
self.assertEqual(ttype, "NAME")
self.assertEqual(tvalue, "test")
size = blkid.get_dev_size(self.loop_dev)
self.assertEqual(size, 2097152) # test.img is 2 MiB
# dos should be always supported so we can use it here to test
if hasattr(blkid, "partition_types"):
types = blkid.partition_types()
self.assertIn("dos", types)
# ext4 should be always supported so we can use it here to test
supers = blkid.superblocks()
self.assertIn("ext4", supers)
def test_uevent(self):
with self.assertRaises(RuntimeError):
blkid.send_uevent("not-a-device", "change")
blkid.send_uevent(self.loop_dev, "change")
def test_devname(self):
sysfs_path = "/sys/block/%s/dev" % os.path.basename(self.loop_dev)
major_minor = utils.read_file(sysfs_path).strip()
major, minor = major_minor.split(":")
devno = os.makedev(int(major), int(minor))
devpath = blkid.devno_to_devname(devno)
self.assertEqual(devpath, self.loop_dev)
# we don't have a partition so let's just ask for the disk name and devno
(dname, ddevno) = blkid.devno_to_wholedisk(devno)
self.assertEqual(dname, os.path.basename(self.loop_dev))
self.assertEqual(ddevno, devno)
def test_safe_encode_string(self):
string = "aaaaaa"
safe_string = blkid.safe_string(string)
encoded_string = blkid.encode_string(string)
self.assertEqual(string, safe_string)
self.assertEqual(string, encoded_string)
string = "aa aaa"
safe_string = blkid.safe_string(string)
encoded_string = blkid.encode_string(string)
self.assertEqual(safe_string, "aa_aaa")
self.assertEqual(encoded_string, "aa\\x20aaa")
def test_tags(self):
device = blkid.evaluate_tag("LABEL", "test-ext3")
self.assertEqual(device, self.loop_dev)
device = blkid.evaluate_tag("LABEL", "definitely-not-a-valid-label")
self.assertIsNone(device)
device = blkid.evaluate_spec("LABEL=test-ext3")
self.assertEqual(device, self.loop_dev)
device = blkid.evaluate_spec("LABEL=definitely-not-a-valid-label")
self.assertIsNone(device)

@@ -0,0 +1,68 @@
import os
import unittest
import tempfile
from . import utils
import blkid
@unittest.skipUnless(os.geteuid() == 0, "requires root access")
class CacheTestCase(unittest.TestCase):
test_image = "test.img.xz"
loop_dev = None
cache_file = None
@classmethod
def setUpClass(cls):
test_dir = os.path.abspath(os.path.dirname(__file__))
cls.loop_dev = utils.loop_setup(os.path.join(test_dir, cls.test_image))
_, cls.cache_file = tempfile.mkstemp()
@classmethod
def tearDownClass(cls):
if cls.loop_dev:
utils.loop_teardown(cls.loop_dev)
if cls.cache_file:
os.remove(cls.cache_file)
def test_cache(self):
cache = blkid.Cache(filename=self.cache_file)
cache.probe_all()
cache.probe_all(removable=True)
cache.gc()
device = cache.get_device(self.loop_dev)
self.assertIsNotNone(device)
self.assertEqual(device.devname, self.loop_dev)
device = cache.find_device("LABEL", "not-in-cache")
self.assertIsNone(device)
device = cache.find_device("LABEL", "test-ext3")
self.assertIsNotNone(device)
self.assertEqual(device.devname, self.loop_dev)
self.assertIsNotNone(device.tags)
self.assertIn("UUID", device.tags.keys())
self.assertEqual(device.tags["UUID"], "35f66dab-477e-4090-a872-95ee0e493ad6")
self.assertIn("LABEL", device.tags.keys())
self.assertEqual(device.tags["LABEL"], "test-ext3")
self.assertIn("TYPE", device.tags.keys())
self.assertEqual(device.tags["TYPE"], "ext3")
self.assertTrue(cache.devices)
self.assertIn(self.loop_dev, [d.devname for d in cache.devices])
device.verify()
self.assertIsNotNone(device)
self.assertEqual(device.devname, self.loop_dev)
# we don't have new devices, so just a sanity check
cache.probe_all(new_only=True)
if __name__ == "__main__":
unittest.main()

@@ -0,0 +1,140 @@
import os
import unittest
from . import utils
import blkid
@unittest.skipUnless(os.geteuid() == 0, "requires root access")
class PartitionsTestCase(unittest.TestCase):
test_image = "gpt.img.xz"
loop_dev = None
@classmethod
def setUpClass(cls):
test_dir = os.path.abspath(os.path.dirname(__file__))
cls.loop_dev = utils.loop_setup(os.path.join(test_dir, cls.test_image))
@classmethod
def tearDownClass(cls):
if cls.loop_dev:
utils.loop_teardown(cls.loop_dev)
def test_partlist(self):
pr = blkid.Probe()
pr.set_device(self.loop_dev)
pr.enable_partitions(True)
ret = pr.do_safeprobe()
self.assertTrue(ret)
plist = pr.partitions
self.assertEqual(plist.numof_partitions, 5)
def test_partitions_filter(self):
pr = blkid.Probe()
pr.set_device(self.loop_dev)
pr.enable_partitions(True)
ret = pr.do_safeprobe()
self.assertTrue(ret)
self.assertEqual(pr.partitions.numof_partitions, 5)
pr.filter_partitions_type(blkid.FLTR_ONLYIN, ["gpt"])
ret = pr.do_safeprobe()
self.assertTrue(ret)
self.assertEqual(pr.partitions.numof_partitions, 5)
pr.filter_partitions_type(blkid.FLTR_ONLYIN, ["gpt", "dos"])
ret = pr.do_safeprobe()
self.assertTrue(ret)
self.assertEqual(pr.partitions.numof_partitions, 5)
pr.filter_partitions_type(blkid.FLTR_NOTIN, ["gpt"])
ret = pr.do_safeprobe()
self.assertFalse(ret)
with self.assertRaises(RuntimeError):
pr.partitions
pr.invert_partitions_filter()
ret = pr.do_safeprobe()
self.assertTrue(ret)
self.assertEqual(pr.partitions.numof_partitions, 5)
pr.filter_partitions_type(blkid.FLTR_NOTIN, ["gpt"])
ret = pr.do_safeprobe()
self.assertFalse(ret)
with self.assertRaises(RuntimeError):
pr.partitions
pr.reset_partitions_filter()
ret = pr.do_safeprobe()
self.assertTrue(ret)
self.assertEqual(pr.partitions.numof_partitions, 5)
def test_partition_table(self):
pr = blkid.Probe()
pr.set_device(self.loop_dev)
pr.enable_partitions(True)
ret = pr.do_safeprobe()
self.assertTrue(ret)
self.assertIsNotNone(pr.partitions)
self.assertIsNotNone(pr.partitions.table)
self.assertEqual(pr.partitions.table.type, "gpt")
self.assertEqual(pr.partitions.table.id, "dd27f98d-7519-4c9e-8041-f2bfa7b1ef61")
self.assertEqual(pr.partitions.table.offset, 512)
nested = pr.partitions.table.get_parent()
self.assertIsNone(nested)
def test_partition(self):
pr = blkid.Probe()
pr.set_device(self.loop_dev)
pr.enable_partitions(True)
ret = pr.do_safeprobe()
self.assertTrue(ret)
self.assertIsNotNone(pr.partitions)
part = pr.partitions.get_partition(0)
self.assertEqual(part.type, 0)
self.assertEqual(part.type_string, "ebd0a0a2-b9e5-4433-87c0-68b6b72699c7")
self.assertEqual(part.uuid, "1dcf10bc-637e-4c52-8203-087ae10a820b")
self.assertTrue(part.is_primary)
self.assertFalse(part.is_extended)
self.assertFalse(part.is_logical)
self.assertEqual(part.name, "ThisIsName")
self.assertEqual(part.flags, 0)
self.assertEqual(part.partno, 1)
self.assertEqual(part.start, 34)
self.assertEqual(part.size, 2014)
if not hasattr(pr.partitions, "get_partition_by_partno"):
return
part = pr.partitions.get_partition_by_partno(1)
self.assertEqual(part.uuid, "1dcf10bc-637e-4c52-8203-087ae10a820b")
# no nested partition table here, just the gpt
table = part.table
self.assertEqual(table.type, "gpt")
# devno_to_part
disk_name = os.path.basename(self.loop_dev)
sysfs_path = "/sys/block/%s/%s/dev" % (disk_name, disk_name + "p" + str(part.partno))
major_minor = utils.read_file(sysfs_path).strip()
major, minor = major_minor.split(":")
devno = os.makedev(int(major), int(minor))
part = pr.partitions.devno_to_partition(devno)
self.assertEqual(part.uuid, "1dcf10bc-637e-4c52-8203-087ae10a820b")

@@ -0,0 +1,292 @@
import os
import unittest
from . import utils
import blkid
@unittest.skipUnless(os.geteuid() == 0, "requires root access")
class ProbeTestCase(unittest.TestCase):
test_image = "test.img.xz"
loop_dev = None
@classmethod
def setUpClass(cls):
test_dir = os.path.abspath(os.path.dirname(__file__))
cls.loop_dev = utils.loop_setup(os.path.join(test_dir, cls.test_image))
cls.ver_code, _version, _date = blkid.get_library_version()
@classmethod
def tearDownClass(cls):
if cls.loop_dev:
utils.loop_teardown(cls.loop_dev)
def test_probe(self):
pr = blkid.Probe()
pr.set_device(self.loop_dev)
self.assertEqual(pr.offset, 0)
self.assertEqual(pr.sectors, 4096)
self.assertEqual(pr.sector_size, 512)
self.assertEqual(pr.size, pr.sectors * pr.sector_size)
self.assertGreater(pr.fd, 0)
self.assertNotEqual(pr.devno, 0)
self.assertNotEqual(pr.wholedisk_devno, 0)
self.assertTrue(pr.is_wholedisk)
if self.ver_code >= 2300:
pr.sector_size = 4096
self.assertEqual(pr.sector_size, 4096)
else:
with self.assertRaises(AttributeError):
pr.sector_size = 4096
pr.reset_probe()
def test_probing(self):
pr = blkid.Probe()
with self.assertRaises(ValueError):
pr.do_probe()
pr.set_device(self.loop_dev)
pr.enable_superblocks(True)
pr.set_superblocks_flags(blkid.SUBLKS_TYPE | blkid.SUBLKS_USAGE | blkid.SUBLKS_MAGIC)
ret = pr.do_probe()
self.assertTrue(ret)
usage = pr.lookup_value("USAGE")
self.assertEqual(usage, b"filesystem")
pr.step_back()
ret = pr.do_probe()
self.assertTrue(ret)
usage = pr.lookup_value("USAGE")
self.assertEqual(usage, b"filesystem")
if hasattr(pr, "reset_buffers"):
pr.reset_buffers()
pr.step_back()
ret = pr.do_probe()
self.assertTrue(ret)
usage = pr.lookup_value("USAGE")
self.assertEqual(usage, b"filesystem")
if hasattr(pr, "hide_range"):
offset = pr.lookup_value("SBMAGIC_OFFSET")
magic = pr.lookup_value("SBMAGIC")
pr.hide_range(int(offset), len(magic))
pr.step_back()
ret = pr.do_probe()
self.assertFalse(ret)
with self.assertRaises(RuntimeError):
usage = pr.lookup_value("USAGE")
def test_safe_probing(self):
pr = blkid.Probe()
pr.set_device(self.loop_dev)
pr.enable_superblocks(True)
pr.set_superblocks_flags(blkid.SUBLKS_TYPE | blkid.SUBLKS_USAGE | blkid.SUBLKS_UUID)
# not probed yet, len should be 0
self.assertEqual(len(pr), 0)
self.assertFalse(pr.keys())
self.assertFalse(pr.values())
self.assertFalse(pr.items())
ret = pr.do_safeprobe()
self.assertTrue(ret)
# three or more items should be in the probe now
self.assertGreaterEqual(len(pr), 3)
usage = pr.lookup_value("USAGE")
self.assertEqual(usage, b"filesystem")
usage = pr["USAGE"]
self.assertEqual(usage, b"filesystem")
fstype = pr.lookup_value("TYPE")
self.assertEqual(fstype, b"ext3")
fstype = pr["TYPE"]
self.assertEqual(fstype, b"ext3")
fsuuid = pr.lookup_value("UUID")
self.assertEqual(fsuuid, b"35f66dab-477e-4090-a872-95ee0e493ad6")
fsuuid = pr["UUID"]
self.assertEqual(fsuuid, b"35f66dab-477e-4090-a872-95ee0e493ad6")
keys = pr.keys()
self.assertIn("USAGE", keys)
self.assertIn("TYPE", keys)
self.assertIn("UUID", keys)
values = pr.values()
self.assertIn("filesystem", values)
self.assertIn("ext3", values)
self.assertIn("35f66dab-477e-4090-a872-95ee0e493ad6", values)
items = pr.items()
self.assertIn(("USAGE", "filesystem"), items)
self.assertIn(("TYPE", "ext3"), items)
self.assertIn(("UUID", "35f66dab-477e-4090-a872-95ee0e493ad6"), items)
def test_probe_filter_type(self):
pr = blkid.Probe()
pr.set_device(self.loop_dev)
pr.enable_superblocks(True)
pr.set_superblocks_flags(blkid.SUBLKS_TYPE | blkid.SUBLKS_USAGE | blkid.SUBLKS_UUID)
pr.filter_superblocks_type(blkid.FLTR_ONLYIN, ["ext3", "ext4"])
ret = pr.do_safeprobe()
self.assertTrue(ret)
fstype = pr.lookup_value("TYPE")
self.assertEqual(fstype, b"ext3")
pr.filter_superblocks_type(blkid.FLTR_NOTIN, ["ext3", "ext4"])
ret = pr.do_safeprobe()
self.assertFalse(ret)
with self.assertRaises(RuntimeError):
fstype = pr.lookup_value("TYPE")
pr.filter_superblocks_type(blkid.FLTR_NOTIN, ["vfat", "ntfs"])
ret = pr.do_safeprobe()
self.assertTrue(ret)
fstype = pr.lookup_value("TYPE")
self.assertEqual(fstype, b"ext3")
# invert the filter
pr.invert_superblocks_filter()
ret = pr.do_safeprobe()
self.assertFalse(ret)
with self.assertRaises(RuntimeError):
fstype = pr.lookup_value("TYPE")
# reset to default
pr.reset_superblocks_filter()
ret = pr.do_safeprobe()
self.assertTrue(ret)
fstype = pr.lookup_value("TYPE")
self.assertEqual(fstype, b"ext3")
def test_probe_filter_usage(self):
pr = blkid.Probe()
pr.set_device(self.loop_dev)
pr.enable_superblocks(True)
pr.set_superblocks_flags(blkid.SUBLKS_TYPE | blkid.SUBLKS_USAGE | blkid.SUBLKS_UUID)
pr.filter_superblocks_usage(blkid.FLTR_ONLYIN, blkid.USAGE_FILESYSTEM)
pr.do_safeprobe()
usage = pr.lookup_value("USAGE")
self.assertEqual(usage, b"filesystem")
pr.filter_superblocks_usage(blkid.FLTR_NOTIN, blkid.USAGE_FILESYSTEM | blkid.USAGE_CRYPTO)
pr.do_safeprobe()
with self.assertRaises(RuntimeError):
usage = pr.lookup_value("USAGE")
pr.filter_superblocks_usage(blkid.FLTR_NOTIN, blkid.USAGE_RAID | blkid.USAGE_CRYPTO)
pr.do_safeprobe()
usage = pr.lookup_value("USAGE")
self.assertEqual(usage, b"filesystem")
# invert the filter
pr.invert_superblocks_filter()
pr.do_safeprobe()
with self.assertRaises(RuntimeError):
usage = pr.lookup_value("USAGE")
# reset to default
pr.reset_superblocks_filter()
pr.do_safeprobe()
usage = pr.lookup_value("USAGE")
self.assertEqual(usage, b"filesystem")
def test_topology(self):
pr = blkid.Probe()
pr.set_device(self.loop_dev)
pr.enable_superblocks(True)
pr.set_superblocks_flags(blkid.SUBLKS_TYPE | blkid.SUBLKS_USAGE | blkid.SUBLKS_UUID)
pr.enable_topology(True)
ret = pr.do_safeprobe()
self.assertTrue(ret)
self.assertIsNotNone(pr.topology)
self.assertEqual(pr.topology.alignment_offset, 0)
self.assertEqual(pr.topology.logical_sector_size, 512)
self.assertEqual(pr.topology.minimum_io_size, 512)
self.assertEqual(pr.topology.optimal_io_size, 0)
self.assertEqual(pr.topology.physical_sector_size, 512)
if self.ver_code >= 2360:
self.assertFalse(pr.topology.dax)
else:
with self.assertRaises(AttributeError):
self.assertIsNone(pr.topology.dax)
@unittest.skipUnless(os.geteuid() == 0, "requires root access")
class WipeTestCase(unittest.TestCase):
test_image = "test.img.xz"
loop_dev = None
def setUp(self):
test_dir = os.path.abspath(os.path.dirname(__file__))
self.loop_dev = utils.loop_setup(os.path.join(test_dir, self.test_image))
def tearDown(self):
test_dir = os.path.abspath(os.path.dirname(__file__))
if self.loop_dev:
utils.loop_teardown(self.loop_dev,
filename=os.path.join(test_dir, self.test_image))
def test_wipe(self):
pr = blkid.Probe()
pr.set_device(self.loop_dev, flags=os.O_RDWR)
pr.enable_superblocks(True)
pr.set_superblocks_flags(blkid.SUBLKS_TYPE | blkid.SUBLKS_USAGE | blkid.SUBLKS_MAGIC)
while pr.do_probe():
pr.do_wipe(False)
pr.reset_probe()
ret = pr.do_probe()
self.assertFalse(ret)
if __name__ == "__main__":
unittest.main()

@@ -0,0 +1,43 @@
import os
import subprocess
def run_command(command):
res = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
out, err = res.communicate()
if res.returncode != 0:
output = out.decode().strip() + "\n\n" + err.decode().strip()
else:
output = out.decode().strip()
return (res.returncode, output)
def read_file(filename):
with open(filename, "r") as f:
content = f.read()
return content
def loop_setup(filename):
if filename.endswith(".xz") and not os.path.exists(filename[:-3]):
ret, out = run_command("xz --decompress --keep %s" % filename)
if ret != 0:
raise RuntimeError("Failed to decompress file %s: %s" % (filename, out))
filename = filename[:-3]
ret, out = run_command("losetup --show --partscan -f %s" % filename)
if ret != 0:
raise RuntimeError("Failed to create loop device from %s: %s" % (filename, out))
return out
def loop_teardown(loopdev, filename=None):
ret, out = run_command("losetup -d %s" % loopdev)
if ret != 0:
raise RuntimeError("Failed to detach loop device %s: %s" % (loopdev, out))
# remove the extracted test file
if filename and filename.endswith(".xz") and os.path.exists(filename[:-3]):
os.remove(filename[:-3])
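The `run_command` helper above predates the `subprocess.run` API; an equivalent using the modern call (a sketch preserving the same return contract, where stderr is appended only on failure) would be:

```python
import subprocess

def run_command(command):
    """Run a shell command and return (returncode, output).

    On success, output is the stripped stdout; on failure, the stripped
    stderr is appended after a blank line, matching the helper above.
    """
    res = subprocess.run(command, shell=True, capture_output=True, text=True)
    if res.returncode != 0:
        output = res.stdout.strip() + "\n\n" + res.stderr.strip()
    else:
        output = res.stdout.strip()
    return (res.returncode, output)

print(run_command("echo hello"))  # (0, 'hello')
```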