Compare commits


93 Commits

Author SHA1 Message Date
Vadim vtroshchinskiy 47c055cfe0 refs #2458 Add ssh key 2025-07-14 11:33:45 +02:00
Vadim vtroshchinskiy 34069d82cc refs #2364 commit correct post-install hook 2025-07-14 11:02:42 +02:00
Vadim vtroshchinskiy 231fb113a9 refs #2424 - moved to clone engine 2025-07-08 13:00:25 +02:00
Vadim vtroshchinskiy 2a0a0def26 refs #2456 - moved to clone engine 2025-07-08 11:50:50 +02:00
Vadim vtroshchinskiy c68caaa74c refs #2428 Remove obsolete api_server directory 2025-07-07 12:58:16 +02:00
Vadim vtroshchinskiy cc804b38a1 refs #2428 Remove obsolete api directory 2025-07-07 12:57:52 +02:00
Vadim vtroshchinskiy 5f2ac38564 Update gitignore 2025-07-07 12:56:43 +02:00
Vadim vtroshchinskiy cb059a65ef refs #2371 Improve commit names 2025-07-04 13:28:03 +02:00
Vadim vtroshchinskiy a56cb7a160 refs #2364 update changelog 2025-07-01 16:55:45 +02:00
Vadim vtroshchinskiy 419bd5885a refs #2364 Fix typos in templates 2025-07-01 15:33:11 +02:00
Vadim vtroshchinskiy de856dbb79 refs #2364 Update to LTS 2025-07-01 15:32:39 +02:00
Vadim vtroshchinskiy 5c04e5ae5b refs #2364: Drastically reduce amount of needed dependencies 2025-07-01 12:21:53 +02:00
Vadim vtroshchinskiy 749b1fee8d ref #2349 -- fix sfdisk error on non-GPT disk 2025-06-27 13:51:54 +02:00
Vadim vtroshchinskiy eecb19bef6 ref #2247: Improve progress reporting 2025-06-27 13:23:35 +02:00
Vadim vtroshchinskiy 1a2bf3ccf2 ref #2346: Use the general log files 2025-06-27 13:22:55 +02:00
Vadim vtroshchinskiy a5d5f4df1e Move into a module 2025-06-27 09:56:24 +02:00
Vadim vtroshchinskiy d8d982f95d Add modification endpoint 2025-06-27 09:18:43 +02:00
Vadim vtroshchinskiy 521ee62aa1 Implement web progress 2025-06-27 09:18:32 +02:00
Vadim vtroshchinskiy 1357023bc9 Extra comments 2025-06-24 16:30:46 +02:00
Vadim vtroshchinskiy bbcee84ebd Improve error checking 2025-06-24 16:30:32 +02:00
Vadim vtroshchinskiy 6a904ee7eb Stop ssh from asking for a password on the terminal.
Auth has to work through SSH keys always.
2025-06-24 16:29:13 +02:00
Vadim vtroshchinskiy ac280fbdce Restore progress reporting, it's been fixed 2025-06-23 15:59:52 +02:00
Vadim vtroshchinskiy 50e5c57b71 Add missing restoration script 2025-06-18 13:32:35 +02:00
Vadim vtroshchinskiy 043828c6bc Create tags 2025-06-18 13:32:14 +02:00
Vadim vtroshchinskiy ba9accc8de Use Python libraries 2025-06-18 13:31:46 +02:00
Vadim vtroshchinskiy 83208084b1 Support tag creation 2025-06-18 13:27:25 +02:00
Vadim vtroshchinskiy 5472f4919a Improve logging, disable progress bar by default 2025-06-17 14:57:53 +02:00
Vadim vtroshchinskiy 3797aac848 Fix logging, startup library issues 2025-06-17 14:53:40 +02:00
Vadim vtroshchinskiy d6c7f8a979 Fix port in templates 2025-06-16 23:42:41 +02:00
Vadim vtroshchinskiy 84a2c52f11 Update changelog 2025-06-16 21:23:34 +00:00
Vadim vtroshchinskiy 9194bf94bb Update changelog 2025-06-16 21:22:10 +00:00
Vadim vtroshchinskiy 5c9d2eac84 Update changelog 2025-06-16 21:21:23 +00:00
Vadim vtroshchinskiy abbc57d4fa Slightly improve API for ogrepo usability 2025-06-16 12:07:31 +02:00
Vadim vtroshchinskiy 83dba76e43 Add git image creation script 2025-06-16 12:07:31 +02:00
Vadim vtroshchinskiy 5eb09c7a1b Add package files 2025-06-06 09:58:00 +02:00
Vadim vtroshchinskiy e626e8f776 Update changelog 2025-06-05 21:46:30 +00:00
Vadim vtroshchinskiy fe2846099c Update changelog 2025-06-05 10:15:31 +02:00
Vadim vtroshchinskiy f981269561 Fix ini path 2025-06-05 09:42:51 +02:00
Vadim vtroshchinskiy c4d9101f2b Fix permission problem 2025-06-04 23:34:11 +02:00
Vadim vtroshchinskiy ebea7af520 Disable tests 2025-06-04 23:21:37 +02:00
Vadim vtroshchinskiy 0ecb4a0aff Add templates 2025-05-22 09:14:04 +02:00
Vadim vtroshchinskiy 9be76a112f Rename service 2025-04-30 10:39:39 +02:00
Vadim vtroshchinskiy 6662e270be Add missing file 2025-04-30 10:27:12 +02:00
Vadim vtroshchinskiy 442324659c Add branches and tags creation endpoints 2025-04-23 08:43:34 +02:00
Vadim vtroshchinskiy 5b739a1c61 Debian packaging 2025-04-16 00:00:13 +02:00
Vadim vtroshchinskiy bbdfed4cc6 Fixes for running under gunicorn 2025-04-15 23:59:07 +02:00
Vadim vtroshchinskiy 1d1f2caab8 Fix post-install for forgejo deployment
Handle initializing the forgejo database and reinstall
2025-04-15 08:59:02 +02:00
Vadim vtroshchinskiy 3ef8fe9dcd opengnsys-forgejo package 2025-04-09 09:44:09 +02:00
Vadim vtroshchinskiy 4d0b383839 Refactoring for packaging 2025-04-03 00:04:48 +02:00
Vadim vtroshchinskiy 5bc05c19f1 Remove old code 2025-04-03 00:02:01 +02:00
Vadim vtroshchinskiy ec9f25d9b0 Refactoring for package support 2025-04-02 23:59:50 +02:00
Vadim vtroshchinskiy f2ce7267f1 Fix port argument 2025-04-01 13:12:02 +02:00
Vadim vtroshchinskiy ece688c582 Add helpful script 2025-04-01 11:56:23 +02:00
Vadim vtroshchinskiy d929d961f1 Bump forgejo version 2025-04-01 11:09:16 +02:00
Vadim vtroshchinskiy eee84f7d25 Fix repository URL 2025-03-31 16:13:39 +02:00
Vadim vtroshchinskiy ec2fd05fdf Load swagger from disk 2025-03-31 12:24:33 +02:00
Vadim vtroshchinskiy 13257ce085 Add README 2025-03-31 10:25:57 +02:00
Vadim vtroshchinskiy 1c9737b398 Fix error handling 2025-03-31 10:25:44 +02:00
Vadim vtroshchinskiy ccb5e518e7 Add port argument 2025-03-31 10:25:36 +02:00
Vadim vtroshchinskiy e518a509cd Convert to blueprint 2025-03-25 15:22:21 +01:00
Vadim vtroshchinskiy f6a5699c58 Add original repo_api 2025-03-25 11:55:54 +01:00
Vadim vtroshchinskiy e67b08cea5 Initial version of the API server 2025-03-25 09:45:19 +01:00
Vadim vtroshchinskiy d4ce9c3ee3 Make branch deletion RESTful 2025-02-06 16:22:38 +01:00
Vadim vtroshchinskiy 8bebeb619a Branch deletion 2025-02-06 16:14:17 +01:00
Vadim vtroshchinskiy 115df98905 Log every request 2025-02-06 16:03:23 +01:00
Vadim vtroshchinskiy 5721e56237 Rework the ability to use a custom SSH key
The code wasn't up to date with the Forgejo changes
2025-02-06 15:31:37 +01:00
Vadim vtroshchinskiy 3ebc728fb9 Mark git repo as a safe directory
Fixes problems due to git not liking the ownership
2025-02-06 13:15:21 +01:00
Vadim vtroshchinskiy 46732216eb More error logging 2025-02-06 13:14:53 +01:00
Vadim vtroshchinskiy 1f2095ce1a Improve task management, cleanup when there are too many 2025-02-06 13:14:31 +01:00
Vadim vtroshchinskiy 09baf6d1e8 Fix HTTP exception handling
Using too general of an exception was causing problems.
2025-02-06 09:38:31 +01:00
Vadim vtroshchinskiy 73118501b3 Improvements for logging and error handling 2025-01-29 09:45:26 +01:00
Vadim vtroshchinskiy 14cd2d4363 Change git repo path 2025-01-24 09:49:32 +01:00
Vadim vtroshchinskiy 4ef29e9fca Fix ogrepository paths 2025-01-23 09:59:44 +01:00
Vadim vtroshchinskiy 6491757535 Fix namespaces 2025-01-17 09:50:47 +01:00
Vadim vtroshchinskiy dc59b33e8a Improve installation process, make it possible to extract keys from oglive 2025-01-17 09:49:12 +01:00
Vadim vtroshchinskiy 1d4100dcc0 Update english documentation 2025-01-13 15:56:10 +01:00
Vadim vtroshchinskiy 62b6319845 Restructure git installer to work without ogboot on the same machine, update docs 2025-01-13 15:16:39 +01:00
Vadim vtroshchinskiy a60d93ce03 Reorder and fix for ogrepository reorganization
Still needs a bit of improvement to deal with the case of not being
on the same machine as ogadmin
2025-01-13 09:54:40 +01:00
Vadim vtroshchinskiy 7c83f24b31 Add make_orig script
This downloads and creates the .orig tar gz for debian packaging
2025-01-10 12:56:28 +01:00
Vadim vtroshchinskiy cbbea5ff47 Add pyblkid copyright file 2025-01-10 12:55:56 +01:00
Vadim vtroshchinskiy 26427a67f3 add python libarchive-c original package 2025-01-10 12:55:20 +01:00
Vadim vtroshchinskiy 1bb520b61c Ignore more files 2025-01-10 12:54:54 +01:00
Vadim vtroshchinskiy f05c0e3943 Ignore python cache 2025-01-09 11:59:39 +01:00
Vadim vtroshchinskiy c3c613fdea Update documentation 2024-12-31 01:08:13 +01:00
Vadim vtroshchinskiy 31b15d33a1 Add packages 2024-12-31 01:08:13 +01:00
Vadim vtroshchinskiy 1575934568 Make --pull work like the other commands 2024-12-31 01:08:12 +01:00
Vadim vtroshchinskiy 51a8fb66db Improve repository initialization
Improve performance, better progress display
2024-12-31 01:08:12 +01:00
Vadim vtroshchinskiy 22bbeb0e35 Make unmounting more robust 2024-12-31 01:08:12 +01:00
Vadim vtroshchinskiy bcf376ab82 Make log filename machine-dependent
Move kernel args parsing
2024-12-31 01:08:12 +01:00
Vadim vtroshchinskiy 655dfbb049 Better status reports 2024-12-31 01:08:12 +01:00
Vadim vtroshchinskiy 6e09f5095e Add extra mounts update 2024-12-31 01:08:12 +01:00
Vadim vtroshchinskiy 9f6b7e25f9 Constants 2024-12-31 01:08:06 +01:00
Vadim vtroshchinskiy 6997dfeeb6 Use tqdm 2024-12-31 01:07:29 +01:00
208 changed files with 32708 additions and 4096 deletions

.gitignore vendored 100644 (+10)

@@ -0,0 +1,10 @@
__pycache__
.venv
venvog
*.deb
*.build
*.dsc
*.changes
*.buildinfo
*.tar.gz
*-stamp

@@ -1,68 +0,0 @@
# Git API
`gitapi.py` is an API for OgGit, written in Python/Flask.
It is an HTTP server that receives commands and performs maintenance actions, including the creation and deletion of repositories.

# Installing Python dependencies
The conversion of the code to Python 3 currently requires the packages specified in `requirements.txt`.
To install the Python dependencies, the `venv` module (https://docs.python.org/3/library/venv.html) is used, which installs all dependencies in an environment isolated from the system.

# Usage
## Older distributions (18.04)

```bash
sudo apt install -y python3.8 python3.8-venv python3-venv libarchive-dev
python3.8 -m venv venvog
. venvog/bin/activate
python3.8 -m pip install --upgrade pip
pip3 install -r requirements.txt
```

Run with:

```bash
./gitapi.py
```

## Usage
**Note:** Run as `opengnsys`, since the API manages the images located in `/opt/opengnsys/images`.

```bash
$ . venvog/bin/activate
$ ./gitapi.py
```

# Documentation
Python documentation can be generated with a utility such as pdoc3 (several alternatives exist):

```bash
# Install pdoc3
pip install --user pdoc3

# Generate documentation
pdoc3 --force --html opengnsys_git_installer.py
```

# Operation
## Requirements
The gitapi is designed to run within an existing OpenGnsys environment, and should be installed on an ogrepository.

## API examples
### Get the list of branches

```bash
$ curl -L http://localhost:5000/repositories/linux/branches
{
    "branches": [
        "master"
    ]
}
```

### Synchronize with a remote repository

```bash
curl --header "Content-Type: application/json" --data '{"remote_repository":"foobar"}' -X POST -L http://localhost:5000/repositories/linux/sync
```
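The curl calls above can also be scripted. A minimal sketch using only Python's standard library, assuming the API is reachable at `http://localhost:5000` as in the examples (the endpoint path is taken from the sync example; `sync_request` is an illustrative helper, not part of the API):

```python
import json
import urllib.request

BASE = "http://localhost:5000"  # assumed API address, as in the curl examples

def sync_request(repo, remote_repository):
    """Build the POST request equivalent to the curl sync example."""
    body = json.dumps({"remote_repository": remote_repository}).encode()
    return urllib.request.Request(
        f"{BASE}/repositories/{repo}/sync",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = sync_request("linux", "foobar")
print(req.full_url)      # http://localhost:5000/repositories/linux/sync
print(req.get_method())  # POST
# With the server running: urllib.request.urlopen(req) would send it
```

Sending is left out because it needs a running server; the request object shows the URL, method, and JSON body the endpoint expects.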

@@ -1,69 +0,0 @@
# Git API
`gitapi.py` is an API for OgGit, written in Python/Flask.
It is an HTTP server that receives commands and performs maintenance actions, including the creation and deletion of repositories.

# Installing Python dependencies
The conversion of the code to Python 3 currently requires the packages specified in `requirements.txt`.
To install the Python dependencies, the `venv` module (https://docs.python.org/3/library/venv.html) is used, which installs all the dependencies in an environment isolated from the system.

# Usage
## Older distributions (18.04)

```bash
sudo apt install -y python3.8 python3.8-venv python3-venv libarchive-dev
python3.8 -m venv venvog
. venvog/bin/activate
python3.8 -m pip install --upgrade pip
pip3 install -r requirements.txt
```

Run with:

```bash
./gitapi.py
```

## Usage
**Note:** Run as `opengnsys`, since it manages the images located in `/opt/opengnsys/images`.

```bash
$ . venvog/bin/activate
$ ./gitapi.py
```

# Documentation
Python documentation can be generated with a utility such as pdoc3 (several alternatives exist):

```bash
# Install pdoc3
pip install --user pdoc3

# Generate documentation
pdoc3 --force --html opengnsys_git_installer.py
```

# Operation
## Requirements
The gitapi is designed to work within an existing OpenGnsys environment. It must be installed on an ogrepository.

## API example
### Get the list of branches

```bash
$ curl -L http://localhost:5000/repositories/linux/branches
{
    "branches": [
        "master"
    ]
}
```

### Synchronize with a remote repository

```bash
curl --header "Content-Type: application/json" --data '{"remote_repository":"foobar"}' -X POST -L http://localhost:5000/repositories/linux/sync
```

@@ -1,492 +0,0 @@
#!/usr/bin/env python3
"""
This module provides a Flask-based API for managing Git repositories in the OpenGnsys system.
It includes endpoints for creating, deleting, synchronizing, backing up, and performing garbage
collection on Git repositories. The API also provides endpoints for retrieving repository
information such as the list of repositories and branches, as well as checking the status of
asynchronous tasks.

Classes:
    None

Functions:
    do_repo_backup(repo, params)
    do_repo_sync(repo, params)
    do_repo_gc(repo)
    home()
    get_repositories()
    create_repo(repo)
    sync_repo(repo)
    backup_repository(repo)
    gc_repo(repo)
    tasks_status(task_id)
    delete_repo(repo)
    get_repository_branches(repo)
    health_check()

Constants:
    REPOSITORIES_BASE_PATH (str): The base path where Git repositories are stored.

Global Variables:
    app (Flask): The Flask application instance.
    executor (Executor): The Flask-Executor instance for managing asynchronous tasks.
    tasks (dict): A dictionary to store the status of asynchronous tasks.
"""
# pylint: disable=locally-disabled, line-too-long

import os.path
import os
import shutil
import uuid
import time

import git
import paramiko
from opengnsys_git_installer import OpengnsysGitInstaller
from flask import Flask, request, jsonify  # stream_with_context, Response,
from flask_executor import Executor
from flask_restx import Api, Resource, fields
# from flasgger import Swagger

REPOSITORIES_BASE_PATH = "/opt/opengnsys/images"

start_time = time.time()
tasks = {}

# Create an instance of the Flask class
app = Flask(__name__)
api = Api(app,
          version='0.50',
          title="OpenGnsys Git API",
          description="API for managing disk images stored in Git",
          doc="/swagger/")

git_ns = api.namespace(name="oggit", description="Git operations", path="/oggit/v1")
executor = Executor(app)


def do_repo_backup(repo, params):
    """
    Creates a backup of the specified Git repository and uploads it to a remote server via SFTP.

    Args:
        repo (str): The name of the repository to back up.
        params (dict): A dictionary containing the following keys:
            - ssh_server (str): The SSH server address.
            - ssh_port (int): The SSH server port.
            - ssh_user (str): The SSH username.
            - filename (str): The remote filename where the backup will be stored.

    Returns:
        bool: True if the backup was successful.
    """
    gitrepo = git.Repo(f"{REPOSITORIES_BASE_PATH}/{repo}.git")

    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(params["ssh_server"], params["ssh_port"], params["ssh_user"])
    sftp = ssh.open_sftp()
    with sftp.file(params["filename"], mode='wb+') as remote_file:
        gitrepo.archive(remote_file, format="tar.gz")

    return True


def do_repo_sync(repo, params):
    """
    Synchronizes a local Git repository with a remote repository.

    Args:
        repo (str): The name of the local repository to synchronize.
        params (dict): A dictionary containing the remote repository URL with the key "remote_repository".

    Returns:
        list: A list of dictionaries, each containing:
            - "local_ref" (str): The name of the local reference.
            - "remote_ref" (str): The name of the remote reference.
            - "summary" (str): A summary of the push operation for the reference.
    """
    gitrepo = git.Repo(f"{REPOSITORIES_BASE_PATH}/{repo}.git")

    # Recreate the remote every time, it might change
    if "backup" in gitrepo.remotes:
        gitrepo.delete_remote("backup")

    backup_repo = gitrepo.create_remote("backup", params["remote_repository"])
    pushed_references = backup_repo.push("*:*")

    # This gets returned to the API
    results = []
    for ref in pushed_references:
        results = results + [{"local_ref": ref.local_ref.name, "remote_ref": ref.remote_ref.name, "summary": ref.summary}]

    return results


def do_repo_gc(repo):
    """
    Perform garbage collection on the specified Git repository.

    Args:
        repo (str): The name of the repository to perform garbage collection on.

    Returns:
        bool: True if the garbage collection command was executed successfully.
    """
    gitrepo = git.Repo(f"{REPOSITORIES_BASE_PATH}/{repo}.git")
    gitrepo.git.gc()
    return True


# Define a route for the root URL
@api.route('/')
class GitLib(Resource):

    @api.doc('home')
    def get(self):
        """
        Home route that returns a JSON response with a welcome message for the OpenGnsys Git API.

        Returns:
            Response: A Flask JSON response containing a welcome message.
        """
        return {
            "message": "OpenGnsys Git API"
        }


@git_ns.route('/oggit/v1/repositories')
class GitRepositories(Resource):

    def get(self):
        """
        Retrieve a list of Git repositories.

        This endpoint scans the OpenGnsys image path for directories that
        appear to be Git repositories (i.e., they contain a "HEAD" file).
        It returns a JSON response containing the names of these repositories.

        Returns:
            Response: A JSON response with a list of repository names or an
            error message if the repository storage is not found.
            - 200 OK: When the repositories are successfully retrieved.
            - 500 Internal Server Error: When the repository storage is not found.

        Example JSON response:
            {
                "repositories": ["repo1", "repo2"]
            }
        """
        if not os.path.isdir(REPOSITORIES_BASE_PATH):
            return jsonify({"error": "Repository storage not found, git functionality may not be installed."}), 500

        repos = []
        for entry in os.scandir(REPOSITORIES_BASE_PATH):
            if entry.is_dir(follow_symlinks=False) and os.path.isfile(os.path.join(entry.path, "HEAD")):
                name = entry.name
                if name.endswith(".git"):
                    name = name[:-4]
                repos = repos + [name]

        return jsonify({
            "repositories": repos
        })

    def post(self):
        """
        Create a new Git repository.

        This endpoint creates a new Git repository with the specified name.
        If the repository already exists, it returns a status message indicating so.

        Args:
            repo (str): The name of the repository to be created.

        Returns:
            Response: A JSON response with a status message and HTTP status code.
            - 200: If the repository already exists.
            - 201: If the repository is successfully created.
        """
        data = request.json
        if data is None:
            return jsonify({"error": "Parameters missing"}), 400

        repo = data["name"]
        repo_path = os.path.join(REPOSITORIES_BASE_PATH, repo + ".git")
        if os.path.isdir(repo_path):
            return jsonify({"status": "Repository already exists"}), 200

        installer = OpengnsysGitInstaller()
        installer.add_forgejo_repo(repo)
        # installer.init_git_repo(repo + ".git")

        return jsonify({"status": "Repository created"}), 201


@git_ns.route('/oggit/v1/repositories/<repo>/sync')
class GitRepoSync(Resource):

    def post(self, repo):
        """
        Synchronize a repository with a remote repository.

        This endpoint triggers the synchronization process for a specified repository.
        It expects a JSON payload with the remote repository details.

        Args:
            repo (str): The name of the repository to be synchronized.

        Returns:
            Response: A JSON response indicating the status of the synchronization process.
            - 200: If the synchronization process has started successfully.
            - 400: If the request payload is missing or invalid.
            - 404: If the specified repository is not found.
        """
        repo_path = os.path.join(REPOSITORIES_BASE_PATH, repo + ".git")
        if not os.path.isdir(repo_path):
            return jsonify({"error": "Repository not found"}), 404

        data = request.json
        if data is None:
            return jsonify({"error": "Parameters missing"}), 400

        future = executor.submit(do_repo_sync, repo, data)
        task_id = str(uuid.uuid4())
        tasks[task_id] = future

        return jsonify({"status": "started", "task_id": task_id}), 200


@git_ns.route('/oggit/v1/repositories/<repo>/backup')
class GitRepoBackup(Resource):

    def post(self, repo):
        """
        Backup a specified repository.

        Endpoint: POST /repositories/<repo>/backup

        Args:
            repo (str): The name of the repository to back up.

        Request Body (JSON):
            ssh_port (int, optional): The SSH port to use for the backup. Defaults to 22.

        Returns:
            Response: A JSON response indicating the status of the backup operation.
            - If the repository is not found, returns a 404 error with a message.
            - If the request body is missing, returns a 400 error with a message.
            - If the backup process starts successfully, returns a 200 status with the task ID.

        Notes:
            - The repository path is constructed by appending ".git" to the repository name.
            - The backup operation is performed asynchronously using a thread pool executor.
            - The task ID of the backup operation is generated using UUID and stored in a global tasks dictionary.
        """
        repo_path = os.path.join(REPOSITORIES_BASE_PATH, repo + ".git")
        if not os.path.isdir(repo_path):
            return jsonify({"error": "Repository not found"}), 404

        data = request.json
        if data is None:
            return jsonify({"error": "Parameters missing"}), 400

        if "ssh_port" not in data:
            data["ssh_port"] = 22

        future = executor.submit(do_repo_backup, repo, data)
        task_id = str(uuid.uuid4())
        tasks[task_id] = future

        return jsonify({"status": "started", "task_id": task_id}), 200


@git_ns.route('/oggit/v1/repositories/<repo>/compact', methods=['POST'])
class GitRepoCompact(Resource):

    def post(self, repo):
        """
        Initiates a garbage collection (GC) process for a specified Git repository.

        This endpoint triggers an asynchronous GC task for the given repository.
        The task is submitted to an executor, and a unique task ID is generated
        and returned to the client.

        Args:
            repo (str): The name of the repository to perform GC on.

        Returns:
            Response: A JSON response containing the status of the request and
            a unique task ID if the repository is found, or an error
            message if the repository is not found.
        """
        repo_path = os.path.join(REPOSITORIES_BASE_PATH, repo + ".git")
        if not os.path.isdir(repo_path):
            return jsonify({"error": "Repository not found"}), 404

        future = executor.submit(do_repo_gc, repo)
        task_id = str(uuid.uuid4())
        tasks[task_id] = future

        return jsonify({"status": "started", "task_id": task_id}), 200


@git_ns.route('/oggit/v1/tasks/<task_id>/status')
class GitTaskStatus(Resource):

    def get(self, task_id):
        """
        Endpoint to check the status of a specific task.

        Args:
            task_id (str): The unique identifier of the task.

        Returns:
            Response: A JSON response containing the status of the task.
            - If the task is not found, returns a 404 error with an error message.
            - If the task is completed, returns a 200 status with the result.
            - If the task is still in progress, returns a 202 status indicating the task is in progress.
        """
        if task_id not in tasks:
            return jsonify({"error": "Task not found"}), 404

        future = tasks[task_id]
        if future.done():
            result = future.result()
            return jsonify({"status": "completed", "result": result}), 200
        else:
            return jsonify({"status": "in progress"}), 202


@git_ns.route('/oggit/v1/repositories/<repo>', methods=['DELETE'])
class GitRepo(Resource):

    def delete(self, repo):
        """
        Deletes a Git repository.

        This endpoint deletes a Git repository specified by the `repo` parameter.
        If the repository does not exist, it returns a 404 error with a message
        indicating that the repository was not found. If the repository is successfully
        deleted, it returns a 200 status with a message indicating that the repository
        was deleted.

        Args:
            repo (str): The name of the repository to delete.

        Returns:
            Response: A JSON response with a status message and the appropriate HTTP status code.
        """
        repo_path = os.path.join(REPOSITORIES_BASE_PATH, repo + ".git")
        if not os.path.isdir(repo_path):
            return jsonify({"error": "Repository not found"}), 404

        shutil.rmtree(repo_path)
        return jsonify({"status": "Repository deleted"}), 200


@git_ns.route('/oggit/v1/repositories/<repo>/branches')
class GitRepoBranches(Resource):

    def get(self, repo):
        """
        Retrieve the list of branches for a given repository.

        Args:
            repo (str): The name of the repository.

        Returns:
            Response: A JSON response containing a list of branch names or an error message if the repository is not found.
            - 200: A JSON object with a "branches" key containing a list of branch names.
            - 404: A JSON object with an "error" key containing the message "Repository not found" if the repository does not exist.
        """
        repo_path = os.path.join(REPOSITORIES_BASE_PATH, repo + ".git")
        if not os.path.isdir(repo_path):
            return jsonify({"error": "Repository not found"}), 404

        git_repo = git.Repo(repo_path)
        branches = []
        for branch in git_repo.branches:
            branches = branches + [branch.name]

        return jsonify({
            "branches": branches
        })


@git_ns.route('/health')
class GitHealth(Resource):

    def get(self):
        """
        Health check endpoint.

        This endpoint returns a JSON response indicating the health status of the application.

        Returns:
            Response: A JSON response with a status key set to "OK". Currently it always returns
            a successful value, but this endpoint can still be used to check that the API is
            active and functional.
        """
        return {
            "status": "OK"
        }


@git_ns.route('/status')
class GitStatus(Resource):

    def get(self):
        """
        Status check endpoint.

        This endpoint returns a JSON response indicating the status of the application.

        Returns:
            Response: A JSON response with status information
        """
        return {
            "uptime": time.time() - start_time,
            "active_tasks": len(tasks)
        }


api.add_namespace(git_ns)

# Run the Flask app
if __name__ == '__main__':
    print(f"Map: {app.url_map}")
    app.run(debug=True, host='0.0.0.0')
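All the long-running endpoints above share one pattern: submit the work to the executor, store the resulting Future in the `tasks` dict under a fresh UUID, and let the client poll `/tasks/<task_id>/status`. A standalone sketch of that pattern using only the standard library (no Flask involved; `long_job` is a stand-in for the real backup/sync/gc functions):

```python
import uuid
from concurrent.futures import ThreadPoolExecutor

tasks = {}
executor = ThreadPoolExecutor(max_workers=2)

def long_job():
    """Placeholder for do_repo_backup / do_repo_sync / do_repo_gc."""
    return "completed"

# Submit, as the sync/backup/compact endpoints do
future = executor.submit(long_job)
task_id = str(uuid.uuid4())
tasks[task_id] = future

# Poll, as /tasks/<task_id>/status does; here we just block until done
result = tasks[task_id].result()
print(result)  # completed
```

In the API itself, `future.done()` is checked without blocking, so an unfinished task yields the 202 "in progress" response instead.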

@@ -1 +0,0 @@
../installer/opengnsys_git_installer.py

@@ -1,34 +0,0 @@
aniso8601==9.0.1
attrs==24.2.0
bcrypt==4.2.0
blinker==1.8.2
cffi==1.17.1
click==8.1.7
cryptography==43.0.1
dataclasses==0.6
flasgger==0.9.7.1
Flask==3.0.3
Flask-Executor==1.0.0
flask-restx==1.3.0
gitdb==4.0.11
GitPython==3.1.43
importlib_resources==6.4.5
itsdangerous==2.2.0
Jinja2==3.1.4
jsonschema==4.23.0
jsonschema-specifications==2024.10.1
libarchive-c==5.1
MarkupSafe==3.0.1
mistune==3.0.2
packaging==24.1
paramiko==3.5.0
pycparser==2.22
PyNaCl==1.5.0
pytz==2024.2
PyYAML==6.0.2
referencing==0.35.1
rpds-py==0.20.0
six==1.16.0
smmap==5.0.1
termcolor==2.5.0
Werkzeug==3.0.4

@@ -1,27 +0,0 @@
bcrypt==4.0.1
cffi==1.15.1
click==8.0.4
colorterm==0.3
contextvars==2.4
cryptography==40.0.2
dataclasses==0.8
Flask==2.0.3
Flask-Executor==1.0.0
gitdb==4.0.9
GitPython==3.1.20
immutables==0.19
importlib-metadata==4.8.3
itsdangerous==2.0.1
Jinja2==3.0.3
libarchive==0.4.7
MarkupSafe==2.0.1
nose==1.3.7
paramiko==3.5.0
pkg_resources==0.0.0
pycparser==2.21
PyNaCl==1.5.0
smmap==5.0.0
termcolor==1.1.0
typing_extensions==4.1.1
Werkzeug==2.0.3
zipp==3.6.0

@@ -1,122 +0,0 @@
# GitLib
The `gitlib.py` is a Python library also usable as a command-line program for testing purposes.
It contains functions for managing git, and the command-line interface allows executing them without needing to write a program that uses the library.
## Requirements
Gitlib is designed to work within an existing OpenGnsys environment. It invokes some OpenGnsys commands internally and reads the parameters passed to the kernel in oglive.
Therefore, it will not work correctly outside of an oglive environment.
## Installing Python dependencies
The code conversion to Python 3 currently requires the packages specified in `requirements.txt`.
The `venv` module (https://docs.python.org/3/library/venv.html) is used to install Python dependencies, creating an environment isolated from the system.
**Note:** Ubuntu 24.04 includes most of the required dependencies as packages, but there is no `blkid` package, so it must be installed using pip within a virtual environment.
Run the following commands:
```bash
sudo apt install -y python3 libarchive-dev libblkid-dev pkg-config libacl1-dev
python3 -m venv venvog
. venvog/bin/activate
python3 -m pip install --upgrade pip
pip3 install -r requirements.txt
```
# Usage
Run with:
```bash
# . venvog/bin/activate
# ./gitlib.py
```
In command-line mode, help can be displayed with:
```bash
./gitlib.py --help
```
**Note:** Execute as the `root` user, as `sudo` clears the environment variable changes made by venv. This will likely result in a Python module not found error or program failure due to outdated dependencies.
**Note:** Commands starting with `--test` exist for internal testing. They are temporary and meant to test specific parts of the code. These may require specific conditions to work and will be removed upon completion of development.
## Initialize a repository:
```bash
./gitlib.py --init-repo-from /dev/sda2 --repo linux
```
This initializes the 'linux' repository with the content of /mnt/sda2.
`--repo` specifies the name of one of the repositories configured during the git installation (see git installer).
The repository is uploaded to the ogrepository, obtained from the boot parameter passed to the kernel.
## Clone a repository:
```bash
./gitlib.py --clone-repo-to /dev/sda2 --boot-device /dev/sda --repo linux
```
This clones a repository from the ogrepository. The target is a physical device that will be formatted with the necessary file system.
`--boot-device` specifies the boot device where the bootloader (GRUB or similar) will be installed.
`--repo` is the repository name contained in ogrepository.
# Special Considerations for Windows
## Cloning
* Windows must be completely shut down, not hibernated. See: https://learn.microsoft.com/en-us/troubleshoot/windows-client/setup-upgrade-and-drivers/disable-and-re-enable-hibernation
* Windows must be cleanly shut down using "Shut Down". Gitlib may fail to mount a disk from an improperly shut down system. If so, boot Windows again and shut it down properly.
* Disk encryption (BitLocker) cannot be used.
## Restoration
Windows uses a structure called BCD (https://learn.microsoft.com/en-us/windows-hardware/manufacture/desktop/bcd-system-store-settings-for-uefi?view=windows-11) to store boot configuration.
This structure can vary depending on the machine where it is deployed. For this reason, gitlib supports storing multiple versions of the BCD internally and selecting the one corresponding to a specific machine.
# Documentation
Python documentation can be generated using utilities such as `pdoc3` (other alternatives are also possible):
```bash
# Install pdoc3
pip install --user pdoc3
# Generate documentation
pdoc3 --force --html opengnsys_git_installer.py
```
# Functionality
## Metadata
Git cannot store data about extended attributes, sockets, or other special file types. Gitlib stores these in `.opengnsys-metadata` at the root of the repository.
The data is saved in `jsonl` files, a format with one JSON object per line. This makes partial application easy, since only the relevant lines need to be processed.
The following files are included:
* `acls.jsonl`: ACLs
* `empty_directories.jsonl`: Empty directories, as Git cannot store them
* `filesystems.json`: Information about file systems: types, sizes, UUIDs
* `gitignores.jsonl`: List of .gitignore files (renamed to avoid interfering with Git)
* `metadata.json`: General metadata about the repository
* `special_files.jsonl`: Special files like sockets
* `xattrs.jsonl`: Extended attributes
* `renamed.jsonl`: Files renamed to avoid interfering with Git
* `unix_permissions.jsonl`: UNIX permissions (not precisely stored by Git)
* `ntfs_secaudit.txt`: NTFS security data
* `efi_data`: Copy of the EFI (ESP) partition
* `efi_data.(id)`: EFI partition copy corresponding to a specific machine
* `efi_data.(name)`: EFI partition copy corresponding to a name specified by the administrator.
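Since each `jsonl` file holds one JSON object per line, it can be parsed with a few lines of Python. A minimal sketch (the `path` field shown here is illustrative, not the exact schema gitlib uses):

```python
import json

def parse_jsonl(lines):
    """Parse an iterable of JSONL lines into a list of objects, skipping blank lines."""
    return [json.loads(line) for line in lines if line.strip()]

# Two hypothetical entries, as they might appear in empty_directories.jsonl
sample = '{"path": "var/cache"}\n{"path": "tmp"}\n'
entries = parse_jsonl(sample.splitlines())
print([e["path"] for e in entries])  # ['var/cache', 'tmp']
```

For a real file, `parse_jsonl(open("empty_directories.jsonl"))` works the same way, since a file object iterates line by line.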

@@ -1,151 +0,0 @@
# GitLib
`gitlib.py` is a Python library that is also usable as a command-line program for testing.
It contains the Git management functions; the command-line interface allows running them without having to write a program that uses the library.
## Requirements
Gitlib is designed to work inside an existing OpenGnsys environment. It invokes some OpenGnsys commands internally, and reads the parameters passed to the kernel in the ogLive.
It will therefore not work correctly outside an ogLive environment.
## Installing the Python dependencies
The Python 3 conversion of the code currently requires the packages listed in `requirements.txt`.
Python dependencies are installed using the venv module (https://docs.python.org/3/library/venv.html), which installs all dependencies in an environment independent of the system.
**Note:** Ubuntu 24.04 packages most of the required dependencies, but there is no package for `blkid`, so pip and a virtualenv are needed.
Run:
```
sudo apt install -y python3 libarchive-dev libblkid-dev pkg-config libacl1-dev
python3 -m venv venvog
. venvog/bin/activate
python3 -m pip install --upgrade pip
pip3 install -r requirements.txt
```
# Usage
Run with:
```
# . venvog/bin/activate
# ./gitlib.py
```
In command-line mode, help is available with:
```
./gitlib.py --help
```
**Note:** Run as the `root` user, since `sudo` discards the environment variable changes made by venv. The likely result is an error about missing Python modules, or a program failure caused by dependencies that are too old.
**Note:** Commands starting with `--test` exist for internal testing of specific parts of the code, and are temporary. They may require specific conditions to work, and will be removed once development is complete.
## Initializing a repository:
```
./gitlib.py --init-repo-from /dev/sda2 --repo linux
```
This initializes the 'linux' repository with the contents of /mnt/sda2.
`--repo` specifies the name of one of the repositories defined during the Git installation (see the git installer).
The repository is uploaded to the ogrepository, which is obtained from the boot parameter passed to the kernel.
## Cloning a repository:
```
./gitlib.py --clone-repo-to /dev/sda2 --boot-device /dev/sda --repo linux
```
This clones a repository from the ogrepository. The destination is a physical device that will be formatted with the required filesystem.
`--boot-device` specifies the boot device where the bootloader (GRUB or similar) will be installed.
`--repo` is the name of a repository stored in the ogrepository.
# Special considerations for Windows
## Cloning
* Windows must have been shut down completely, without hibernation. See https://learn.microsoft.com/en-us/troubleshoot/windows-client/setup-upgrade-and-drivers/disable-and-re-enable-hibernation
* Windows must have been shut down cleanly, using "Shut down". Gitlib may fail to mount a disk from a system that was shut down uncleanly. In that case, boot Windows again and shut it down properly.
* Disk encryption (BitLocker) cannot be used. It is possible to disable it: https://answers.microsoft.com/en-us/windows/forum/all/how-to-disable-bitlocker-in-windows-10/fc9e12d6-a8cd-4515-ab8f-379c6409aa56
* Running a disk cleanup process is recommended.
* Compacting WinSxS with `Dism.exe /online /Cleanup-Image /StartComponentCleanup /ResetBase` is recommended.
## Restoring
Windows stores its boot configuration in a structure called the BCD (https://learn.microsoft.com/en-us/windows-hardware/manufacture/desktop/bcd-system-store-settings-for-uefi?view=windows-11).
The structure can vary depending on the machine it is deployed to; for this reason gitlib supports storing multiple versions of the BCD internally and choosing the one that corresponds to a specific machine.
## Disk identifiers
Depending on how it is configured, the Windows boot process may refer to partition and disk UUIDs when GPT partitioning is used.
The current code preserves the UUIDs and restores them when cloning.
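
The UUID handling described here relies on `sfdisk`, the same tool the disk library in this repository wraps. A sketch of copying a GPT disk GUID from one disk to another (the `run` hook exists only so the logic can be exercised without a real disk; root privileges and GPT labels are assumed):

```python
import subprocess

def _run(cmd):
    # Execute a command and return its trimmed stdout
    return subprocess.run(cmd, check=True, capture_output=True,
                          encoding="utf-8").stdout.strip()

def copy_disk_uuid(src_disk, dst_disk, run=_run):
    """Read the GPT disk GUID from src_disk and write it to dst_disk."""
    uuid = run(["/usr/sbin/sfdisk", "--disk-id", src_disk])
    run(["/usr/sbin/sfdisk", "--disk-id", dst_disk, uuid])
    return uuid
```

Partition UUIDs work the same way through `sfdisk --part-uuid <disk> <partno> [uuid]`.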
## Machine-specific BCDs
The Windows boot data is stored in `.opengnsys-metadata/efi_data`. Additional versions can be included when needed, by creating an extra directory named `efi_data.(id)`, where id is a serial number obtained with the command `/usr/sbin/dmidecode -s system-uuid`.
For example:
```
# Get the machine's unique ID
dmidecode -s system-uuid
a64cc65b-12a6-42ef-8182-5ae4832e9f19
# Copy the EFI partition into the directory corresponding to that particular machine
mkdir /mnt/sda3/.opengnsys-metadata/efi_data.a64cc65b-12a6-42ef-8182-5ae4832e9f19
cp -Rdpv /mnt/sda1/* /mnt/sda3/.opengnsys-metadata/efi_data.a64cc65b-12a6-42ef-8182-5ae4832e9f19
# commit
```
With this, when the repo is deployed, machine a64cc65b-12a6-42ef-8182-5ae4832e9f19 will use its own boot configuration instead of the general one.
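
The choice between the generic `efi_data` and a machine-specific copy at deploy time can be sketched as follows (`select_efi_dir` is a hypothetical helper for illustration; the actual clone engine's logic may differ):

```python
import os
import subprocess

def select_efi_dir(metadata_root, system_uuid=None):
    """Pick efi_data.<system-uuid> when present, else the generic efi_data."""
    if system_uuid is None:
        # Same source of machine identity as in the example above
        system_uuid = subprocess.run(
            ["/usr/sbin/dmidecode", "-s", "system-uuid"],
            check=True, capture_output=True, encoding="utf-8",
        ).stdout.strip()
    specific = os.path.join(metadata_root, f"efi_data.{system_uuid}")
    if os.path.isdir(specific):
        return specific
    return os.path.join(metadata_root, "efi_data")
```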
# Documentation
Python documentation can be generated with a utility such as pdoc3 (multiple alternatives exist):
```
# Install pdoc3
pip install --user pdoc3
# Generate the documentation
pdoc3 --force --html opengnsys_git_installer.py
```
# How it works
## Metadata
Git cannot store extended attributes, sockets, or other special file types. Gitlib stores these in `.opengnsys-metadata` at the root of the repository.
The data is saved in `jsonl` files, with one JSON object per line. This makes partial application easier, since only the lines that are needed have to be applied.
The following files exist:
* `acls.jsonl`: ACLs
* `empty_directories.jsonl`: Empty directories, since Git cannot store them
* `filesystems.json`: Information about the filesystems: types, sizes, UUIDs
* `gitignores.jsonl`: List of .gitignore files (renamed so they do not interfere with Git)
* `metadata.json`: General metadata about the repository
* `special_files.jsonl`: Special files such as sockets
* `xattrs.jsonl`: Extended attributes
* `renamed.jsonl`: Files renamed so they do not interfere with Git
* `unix_permissions.jsonl`: UNIX permissions (Git does not store them exactly)
* `ntfs_secaudit.txt`: NTFS security data
* `efi_data`: Copy of the EFI (ESP) partition
* `efi_data.(id)`: Copy of the EFI partition for a specific machine.
* `efi_data.(name)`: Copy of the EFI partition for a name specified by the administrator.


@ -1,25 +0,0 @@
# Install from the Admin
```
. venv/bin/activate
./opengnsys_git_installer.py
```
# Initialize the repo from the data of a model machine:
Run in an ogLive booted on the model machine:
```
. venv/bin/activate
./gitlib.py --init-repo-from /dev/sda2 --repo linux
```
# Use git to deploy onto a new machine:
Run in an ogLive booted on the target machine.
Prepare the disk by creating a boot/EFI partition and a data partition.
```
. venv/bin/activate
./gitlib.py --clone-repo-to /dev/sda2 --repo linux --boot-device /dev/sda
```


@ -1,346 +0,0 @@
#!/usr/bin/env python3
import hivex
import argparse
import struct
from hivex import Hivex
from hivex.hive_types import *
# Docs:
#
# https://www.geoffchappell.com/notes/windows/boot/bcd/objects.htm
# https://learn.microsoft.com/en-us/previous-versions/windows/desktop/bcd/bcdbootmgrelementtypes
#print(f"Root: {root}")
BCD_Enumerations = {
"BcdLibraryDevice_ApplicationDevice" : 0x11000001,
"BcdLibraryString_ApplicationPath" : 0x12000002,
"BcdLibraryString_Description" : 0x12000004,
"BcdLibraryString_PreferredLocale" : 0x12000005,
"BcdLibraryObjectList_InheritedObjects" : 0x14000006,
"BcdLibraryInteger_TruncatePhysicalMemory" : 0x15000007,
"BcdLibraryObjectList_RecoverySequence" : 0x14000008,
"BcdLibraryBoolean_AutoRecoveryEnabled" : 0x16000009,
"BcdLibraryIntegerList_BadMemoryList" : 0x1700000a,
"BcdLibraryBoolean_AllowBadMemoryAccess" : 0x1600000b,
"BcdLibraryInteger_FirstMegabytePolicy" : 0x1500000c,
"BcdLibraryInteger_RelocatePhysicalMemory" : 0x1500000D,
"BcdLibraryInteger_AvoidLowPhysicalMemory" : 0x1500000E,
"BcdLibraryBoolean_DebuggerEnabled" : 0x16000010,
"BcdLibraryInteger_DebuggerType" : 0x15000011,
"BcdLibraryInteger_SerialDebuggerPortAddress" : 0x15000012,
"BcdLibraryInteger_SerialDebuggerPort" : 0x15000013,
"BcdLibraryInteger_SerialDebuggerBaudRate" : 0x15000014,
"BcdLibraryInteger_1394DebuggerChannel" : 0x15000015,
"BcdLibraryString_UsbDebuggerTargetName" : 0x12000016,
"BcdLibraryBoolean_DebuggerIgnoreUsermodeExceptions" : 0x16000017,
"BcdLibraryInteger_DebuggerStartPolicy" : 0x15000018,
"BcdLibraryString_DebuggerBusParameters" : 0x12000019,
"BcdLibraryInteger_DebuggerNetHostIP" : 0x1500001A,
"BcdLibraryInteger_DebuggerNetPort" : 0x1500001B,
"BcdLibraryBoolean_DebuggerNetDhcp" : 0x1600001C,
"BcdLibraryString_DebuggerNetKey" : 0x1200001D,
"BcdLibraryBoolean_EmsEnabled" : 0x16000020,
"BcdLibraryInteger_EmsPort" : 0x15000022,
"BcdLibraryInteger_EmsBaudRate" : 0x15000023,
"BcdLibraryString_LoadOptionsString" : 0x12000030,
"BcdLibraryBoolean_DisplayAdvancedOptions" : 0x16000040,
"BcdLibraryBoolean_DisplayOptionsEdit" : 0x16000041,
"BcdLibraryDevice_BsdLogDevice" : 0x11000043,
"BcdLibraryString_BsdLogPath" : 0x12000044,
"BcdLibraryBoolean_GraphicsModeDisabled" : 0x16000046,
"BcdLibraryInteger_ConfigAccessPolicy" : 0x15000047,
"BcdLibraryBoolean_DisableIntegrityChecks" : 0x16000048,
"BcdLibraryBoolean_AllowPrereleaseSignatures" : 0x16000049,
"BcdLibraryString_FontPath" : 0x1200004A,
"BcdLibraryInteger_SiPolicy" : 0x1500004B,
"BcdLibraryInteger_FveBandId" : 0x1500004C,
"BcdLibraryBoolean_ConsoleExtendedInput" : 0x16000050,
"BcdLibraryInteger_GraphicsResolution" : 0x15000052,
"BcdLibraryBoolean_RestartOnFailure" : 0x16000053,
"BcdLibraryBoolean_GraphicsForceHighestMode" : 0x16000054,
"BcdLibraryBoolean_IsolatedExecutionContext" : 0x16000060,
"BcdLibraryBoolean_BootUxDisable" : 0x1600006C,
"BcdLibraryBoolean_BootShutdownDisabled" : 0x16000074,
"BcdLibraryIntegerList_AllowedInMemorySettings" : 0x17000077,
"BcdLibraryBoolean_ForceFipsCrypto" : 0x16000079,
"BcdBootMgrObjectList_DisplayOrder" : 0x24000001,
"BcdBootMgrObjectList_BootSequence" : 0x24000002,
"BcdBootMgrObject_DefaultObject" : 0x23000003,
"BcdBootMgrInteger_Timeout" : 0x25000004,
"BcdBootMgrBoolean_AttemptResume" : 0x26000005,
"BcdBootMgrObject_ResumeObject" : 0x23000006,
"BcdBootMgrObjectList_ToolsDisplayOrder" : 0x24000010,
"BcdBootMgrBoolean_DisplayBootMenu" : 0x26000020,
"BcdBootMgrBoolean_NoErrorDisplay" : 0x26000021,
"BcdBootMgrDevice_BcdDevice" : 0x21000022,
"BcdBootMgrString_BcdFilePath" : 0x22000023,
"BcdBootMgrBoolean_ProcessCustomActionsFirst" : 0x26000028,
"BcdBootMgrIntegerList_CustomActionsList" : 0x27000030,
"BcdBootMgrBoolean_PersistBootSequence" : 0x26000031,
"BcdDeviceInteger_RamdiskImageOffset" : 0x35000001,
"BcdDeviceInteger_TftpClientPort" : 0x35000002,
"BcdDeviceInteger_SdiDevice" : 0x31000003,
"BcdDeviceInteger_SdiPath" : 0x32000004,
"BcdDeviceInteger_RamdiskImageLength" : 0x35000005,
"BcdDeviceBoolean_RamdiskExportAsCd" : 0x36000006,
"BcdDeviceInteger_RamdiskTftpBlockSize" : 0x36000007,
"BcdDeviceInteger_RamdiskTftpWindowSize" : 0x36000008,
"BcdDeviceBoolean_RamdiskMulticastEnabled" : 0x36000009,
"BcdDeviceBoolean_RamdiskMulticastTftpFallback" : 0x3600000A,
"BcdDeviceBoolean_RamdiskTftpVarWindow" : 0x3600000B,
"BcdMemDiagInteger_PassCount" : 0x25000001,
"BcdMemDiagInteger_FailureCount" : 0x25000003,
"Reserved1" : 0x21000001,
"Reserved2" : 0x22000002,
"BcdResumeBoolean_UseCustomSettings" : 0x26000003,
"BcdResumeDevice_AssociatedOsDevice" : 0x21000005,
"BcdResumeBoolean_DebugOptionEnabled" : 0x26000006,
"BcdResumeInteger_BootMenuPolicy" : 0x25000008,
"BcdOSLoaderDevice_OSDevice" : 0x21000001,
"BcdOSLoaderString_SystemRoot" : 0x22000002,
"BcdOSLoaderObject_AssociatedResumeObject" : 0x23000003,
"BcdOSLoaderBoolean_DetectKernelAndHal" : 0x26000010,
"BcdOSLoaderString_KernelPath" : 0x22000011,
"BcdOSLoaderString_HalPath" : 0x22000012,
"BcdOSLoaderString_DbgTransportPath" : 0x22000013,
"BcdOSLoaderInteger_NxPolicy" : 0x25000020,
"BcdOSLoaderInteger_PAEPolicy" : 0x25000021,
"BcdOSLoaderBoolean_WinPEMode" : 0x26000022,
"BcdOSLoaderBoolean_DisableCrashAutoReboot" : 0x26000024,
"BcdOSLoaderBoolean_UseLastGoodSettings" : 0x26000025,
"BcdOSLoaderBoolean_AllowPrereleaseSignatures" : 0x26000027,
"BcdOSLoaderBoolean_NoLowMemory" : 0x26000030,
"BcdOSLoaderInteger_RemoveMemory" : 0x25000031,
"BcdOSLoaderInteger_IncreaseUserVa" : 0x25000032,
"BcdOSLoaderBoolean_UseVgaDriver" : 0x26000040,
"BcdOSLoaderBoolean_DisableBootDisplay" : 0x26000041,
"BcdOSLoaderBoolean_DisableVesaBios" : 0x26000042,
"BcdOSLoaderBoolean_DisableVgaMode" : 0x26000043,
"BcdOSLoaderInteger_ClusterModeAddressing" : 0x25000050,
"BcdOSLoaderBoolean_UsePhysicalDestination" : 0x26000051,
"BcdOSLoaderInteger_RestrictApicCluster" : 0x25000052,
"BcdOSLoaderBoolean_UseLegacyApicMode" : 0x26000054,
"BcdOSLoaderInteger_X2ApicPolicy" : 0x25000055,
"BcdOSLoaderBoolean_UseBootProcessorOnly" : 0x26000060,
"BcdOSLoaderInteger_NumberOfProcessors" : 0x25000061,
"BcdOSLoaderBoolean_ForceMaximumProcessors" : 0x26000062,
"BcdOSLoaderBoolean_ProcessorConfigurationFlags" : 0x25000063,
"BcdOSLoaderBoolean_MaximizeGroupsCreated" : 0x26000064,
"BcdOSLoaderBoolean_ForceGroupAwareness" : 0x26000065,
"BcdOSLoaderInteger_GroupSize" : 0x25000066,
"BcdOSLoaderInteger_UseFirmwarePciSettings" : 0x26000070,
"BcdOSLoaderInteger_MsiPolicy" : 0x25000071,
"BcdOSLoaderInteger_SafeBoot" : 0x25000080,
"BcdOSLoaderBoolean_SafeBootAlternateShell" : 0x26000081,
"BcdOSLoaderBoolean_BootLogInitialization" : 0x26000090,
"BcdOSLoaderBoolean_VerboseObjectLoadMode" : 0x26000091,
"BcdOSLoaderBoolean_KernelDebuggerEnabled" : 0x260000a0,
"BcdOSLoaderBoolean_DebuggerHalBreakpoint" : 0x260000a1,
"BcdOSLoaderBoolean_UsePlatformClock" : 0x260000A2,
"BcdOSLoaderBoolean_ForceLegacyPlatform" : 0x260000A3,
"BcdOSLoaderInteger_TscSyncPolicy" : 0x250000A6,
"BcdOSLoaderBoolean_EmsEnabled" : 0x260000b0,
"BcdOSLoaderInteger_DriverLoadFailurePolicy" : 0x250000c1,
"BcdOSLoaderInteger_BootMenuPolicy" : 0x250000C2,
"BcdOSLoaderBoolean_AdvancedOptionsOneTime" : 0x260000C3,
"BcdOSLoaderInteger_BootStatusPolicy" : 0x250000E0,
"BcdOSLoaderBoolean_DisableElamDrivers" : 0x260000E1,
"BcdOSLoaderInteger_HypervisorLaunchType" : 0x250000F0,
"BcdOSLoaderBoolean_HypervisorDebuggerEnabled" : 0x260000F2,
"BcdOSLoaderInteger_HypervisorDebuggerType" : 0x250000F3,
"BcdOSLoaderInteger_HypervisorDebuggerPortNumber" : 0x250000F4,
"BcdOSLoaderInteger_HypervisorDebuggerBaudrate" : 0x250000F5,
"BcdOSLoaderInteger_HypervisorDebugger1394Channel" : 0x250000F6,
"BcdOSLoaderInteger_BootUxPolicy" : 0x250000F7,
"BcdOSLoaderString_HypervisorDebuggerBusParams" : 0x220000F9,
"BcdOSLoaderInteger_HypervisorNumProc" : 0x250000FA,
"BcdOSLoaderInteger_HypervisorRootProcPerNode" : 0x250000FB,
"BcdOSLoaderBoolean_HypervisorUseLargeVTlb" : 0x260000FC,
"BcdOSLoaderInteger_HypervisorDebuggerNetHostIp" : 0x250000FD,
"BcdOSLoaderInteger_HypervisorDebuggerNetHostPort" : 0x250000FE,
"BcdOSLoaderInteger_TpmBootEntropyPolicy" : 0x25000100,
"BcdOSLoaderString_HypervisorDebuggerNetKey" : 0x22000110,
"BcdOSLoaderBoolean_HypervisorDebuggerNetDhcp" : 0x26000114,
"BcdOSLoaderInteger_HypervisorIommuPolicy" : 0x25000115,
"BcdOSLoaderInteger_XSaveDisable" : 0x2500012b
}
def format_value(bcd, bcd_value):
name = bcd.value_key(bcd_value)
(type, length) = bcd.value_type(bcd_value)
typename = ""
str_value = ""
if type == REG_SZ:
typename = "SZ"
str_value = bcd.value_string(bcd_value)
elif type == REG_DWORD:
typename = "DWORD"
dval = bcd.value_dword(bcd_value)
str_value = hex(dval) + " (" + str(bcd.value_dword(bcd_value)) + ")"
elif type == REG_BINARY:
typename = "BIN"
(length, value) = bcd.value_value(bcd_value)
str_value = value.hex()
elif type == REG_DWORD_BIG_ENDIAN:
typename = "DWORD_BE"
elif type == REG_EXPAND_SZ:
typename = "EXPAND SZ"
elif type == REG_FULL_RESOURCE_DESCRIPTOR:
typename = "RES DESC"
elif type == REG_LINK:
typename = "LINK"
elif type == REG_MULTI_SZ:
typename = "MULTISZ"
(length, str_value) = bcd.value_value(bcd_value)
str_value = str_value.decode('utf-16le')
str_value = str_value.replace("\0", ";")
#value = ";".join("\0".split(value))
elif type == REG_NONE:
typename = "NONE"
elif type == REG_QWORD:
typename = "QWORD"
elif type == REG_RESOURCE_LIST:
typename = "RES LIST"
elif type == REG_RESOURCE_REQUIREMENTS_LIST:
typename = "REQ LIST"
else:
typename = str(type)
str_value = "???"
return (typename, length, str_value)
def dump_all(bcd, root, depth=0):
    """Recursively print every node and value under root.

    Takes the Hivex object explicitly instead of relying on a global."""
    padding = "\t" * depth
    children = bcd.node_children(root)
    if len(children) > 0:
        for child in children:
            name = bcd.node_name(child)
            print(f"{padding}{name}")
            dump_all(bcd, child, depth + 1)
        return
    values = bcd.node_values(root)
    for v in values:
        (type_name, length, str_value) = format_value(bcd, v)
        name = bcd.value_key(v)
        print(f"{padding}{name: <16}: [{type_name: <10}]; ({length: < 4}) {str_value}")
class WindowsBCD:
def __init__(self, filename):
self.filename = filename
self.bcd = Hivex(filename)
def dump(self, root=None, depth = 0):
padding = "\t" * depth
if root is None:
root = self.bcd.root()
children = self.bcd.node_children(root)
if len(children) > 0:
for child in children:
name = self.bcd.node_name(child)
print(f"{padding}{name}")
self.dump(child, depth + 1)
return
values = self.bcd.node_values(root)
for v in values:
(type_name, length, str_value) = format_value(self.bcd, v)
name = self.bcd.value_key(v)
print(f"{padding}{name: <16}: [{type_name: <10}]; ({length: < 4}) {str_value}")
def list(self):
root = self.bcd.root()
objects = self.bcd.node_get_child(root, "Objects")
for child in self.bcd.node_children(objects):
entry_id = self.bcd.node_name(child)
elements = self.bcd.node_get_child(child, "Elements")
description_entry = self.bcd.node_get_child(elements, "12000004")
if description_entry:
values = self.bcd.node_values(description_entry)
if values:
(type_name, length, str_value) = format_value(self.bcd, values[0])
print(f"{entry_id}: {str_value}")
else:
print(f"{entry_id}: [no description value!?]")
appdevice_entry = self.bcd.node_get_child(elements, "11000001")
if appdevice_entry:
values = self.bcd.node_values(appdevice_entry)
(length, data) = self.bcd.value_value(values[0])
hex = data.hex()
print(f"LEN: {length}, HEX: {hex}, RAW: {data}")
if len(data) > 10:
etype = struct.unpack_from('<I', data, offset = 16)
print(f"Type: {etype}")
else:
print(f"{entry_id}: [no description entry 12000004]")
parser = argparse.ArgumentParser(
prog="Windows BCD parser",
description="Parses the BCD",
)
parser.add_argument("--db", type=str, metavar='BCD file', help="Database to use")
parser.add_argument("--dump", action='store_true', help="Dumps the specified database")
parser.add_argument("--list", action='store_true', help="Lists boot entries in the specified database")
args = parser.parse_args()
bcdobj = WindowsBCD(args.db)
if args.dump:
# "/home/vadim/opengnsys/winboot/boot-copy/EFI/Microsoft/Boot/BCD"
#bcd = Hivex(args.dump)
#root = bcd.root()
#dump_all(root)
bcdobj.dump()
elif args.list:
bcdobj.list()


@ -1,115 +0,0 @@
import logging
import subprocess
import re
# pylint: disable=locally-disabled, line-too-long, logging-fstring-interpolation, too-many-lines
class DiskLibrary:
def __init__(self):
self.logger = logging.getLogger("OpengnsysDiskLibrary")
self.logger.setLevel(logging.DEBUG)
def split_device_partition(self, device):
"""
Parses a device file like /dev/sda3 into the root device (/dev/sda) and partition number (3)
Args:
device (str): Device in /dev
Returns:
[base_device, partno]
"""
        # NOTE: assumes /dev/sdaX-style names; NVMe devices (/dev/nvme0n1p3) would need the 'p' separator handled
        r = re.compile("^(.*?)(\\d+)$")
m = r.match(device)
disk = m.group(1)
partno = int(m.group(2))
self.logger.debug(f"{device} parsed into disk device {disk}, partition {partno}")
return (disk, partno)
def get_disk_json_data(self, device):
"""
Returns the partition JSON data dump for the entire disk, even if a partition is passed.
This is specifically in the format used by sfdisk.
Args:
device (str): Block device, eg, /dev/sda3
Returns:
str: JSON dump produced by sfdisk
"""
(disk, partno) = self.split_device_partition(device)
result = subprocess.run(["/usr/sbin/sfdisk", "--json", disk], check=True, capture_output=True, encoding='utf-8')
return result.stdout.strip()
def get_disk_uuid(self, device):
"""
Returns the UUID of the disk itself, if there's a GPT partition table.
Args:
device (str): Block device, eg, /dev/sda3
Returns:
str: UUID
"""
(disk, partno) = self.split_device_partition(device)
result = subprocess.run(["/usr/sbin/sfdisk", "--disk-id", disk], check=True, capture_output=True, encoding='utf-8')
return result.stdout.strip()
def set_disk_uuid(self, device, uuid):
(disk, partno) = self.split_device_partition(device)
subprocess.run(["/usr/sbin/sfdisk", "--disk-id", disk, uuid], check=True, encoding='utf-8')
def get_partition_uuid(self, device):
"""
Returns the UUID of the partition, if there's a GPT partition table.
Args:
device (str): Block device, eg, /dev/sda3
Returns:
str: UUID
"""
(disk, partno) = self.split_device_partition(device)
result = subprocess.run(["/usr/sbin/sfdisk", "--part-uuid", disk, str(partno)], check=True, capture_output=True, encoding='utf-8')
return result.stdout.strip()
def set_partition_uuid(self, device, uuid):
(disk, partno) = self.split_device_partition(device)
subprocess.run(["/usr/sbin/sfdisk", "--part-uuid", disk, str(partno), uuid], check=True, encoding='utf-8')
def get_partition_type(self, device):
"""
Returns the type UUID of the partition, if there's a GPT partition table.
Args:
device (str): Block device, eg, /dev/sda3
Returns:
str: UUID
"""
(disk, partno) = self.split_device_partition(device)
result = subprocess.run(["/usr/sbin/sfdisk", "--part-type", disk, str(partno)], check=True, capture_output=True, encoding='utf-8')
return result.stdout.strip()
def set_partition_type(self, device, uuid):
(disk, partno) = self.split_device_partition(device)
subprocess.run(["/usr/sbin/sfdisk", "--part-type", disk, str(partno), uuid], check=True, encoding='utf-8')


@ -1,544 +0,0 @@
import logging
import subprocess
import os
import json
import blkid
import time
from ntfs import *
# pylint: disable=locally-disabled, line-too-long, logging-fstring-interpolation, too-many-lines
class FilesystemLibrary:
def __init__(self, ntfs_implementation = NTFSImplementation.KERNEL):
self.logger = logging.getLogger("OpengnsysFilesystemLibrary")
self.logger.setLevel(logging.DEBUG)
self.mounts = {}
self.base_mount_path = "/mnt"
self.ntfs_implementation = ntfs_implementation
self.update_mounts()
    def _rmmod(self, module):
        self.logger.debug(f"Trying to unload module {module}...")
        subprocess.run(["/usr/sbin/rmmod", module], check=False)
    def _modprobe(self, module):
        self.logger.debug(f"Trying to load module {module}...")
        subprocess.run(["/usr/sbin/modprobe", module], check=True)
# _parse_mounts
def update_mounts(self):
"""
Update the current mount points by parsing the /proc/mounts file.
This method reads the /proc/mounts file to gather information about
the currently mounted filesystems. It stores this information in a
dictionary where the keys are the mount points and the values are
dictionaries containing details about each filesystem.
The details stored for each filesystem include:
- device: The device file associated with the filesystem.
- mountpoint: The directory where the filesystem is mounted.
- type: The type of the filesystem (e.g., ext4, vfat).
- options: Mount options associated with the filesystem.
- dump_freq: The dump frequency for the filesystem.
- passno: The pass number for filesystem checks.
The method also adds an entry for each mount point with a trailing
slash to ensure consistency in accessing the mount points.
Attributes:
mounts (dict): A dictionary where keys are mount points and values
are dictionaries containing filesystem details.
"""
filesystems = {}
self.logger.debug("Parsing /proc/mounts")
with open("/proc/mounts", 'r', encoding='utf-8') as mounts:
for line in mounts:
parts = line.split()
data = {}
data['device'] = parts[0]
data['mountpoint'] = parts[1]
data['type'] = parts[2]
data['options'] = parts[3]
data['dump_freq'] = parts[4]
data['passno'] = parts[5]
filesystems[data["mountpoint"]] = data
filesystems[data["mountpoint"] + "/"] = data
self.mounts = filesystems
def find_mountpoint(self, device):
"""
Find the mount point for a given device.
This method checks if the specified device is currently mounted and returns
the corresponding mount point if it is found.
Args:
device (str): The path to the device to check.
Returns:
str or None: The mount point of the device if it is mounted, otherwise None.
"""
norm = os.path.normpath(device)
self.logger.debug(f"Checking if {device} is mounted")
for mountpoint, mount in self.mounts.items():
#self.logger.debug(f"Item: {mount}")
#self.logger.debug(f"Checking: " + mount['device'])
if mount['device'] == norm:
return mountpoint
return None
def find_device(self, mountpoint):
"""
Find the device corresponding to a given mount point.
Args:
mountpoint (str): The mount point to search for.
Returns:
str or None: The device corresponding to the mount point if found,
otherwise None.
"""
self.update_mounts()
self.logger.debug("Finding device corresponding to mount point %s", mountpoint)
if mountpoint in self.mounts:
return self.mounts[mountpoint]['device']
else:
self.logger.warning("Failed to find mountpoint %s", mountpoint)
return None
    def is_mounted(self, device=None, mountpoint=None):
"""
Check if a device or mountpoint is currently mounted.
Either checking by device or mountpoint is valid.
Args:
device (str, optional): The device to check if it is mounted.
Defaults to None.
mountpoint (str, optional): The mountpoint to check if it is mounted.
Defaults to None.
Returns:
bool: True if the device is mounted or the mountpoint is in the list
of mounts, False otherwise.
"""
self.update_mounts()
if device:
return not self.find_mountpoint(device) is None
else:
return mountpoint in self.mounts
    def unmount(self, device=None, mountpoint=None):
"""
Unmounts a filesystem.
This method unmounts a filesystem either by the device name or the mountpoint.
If a device is provided, it finds the corresponding mountpoint and unmounts it.
If a mountpoint is provided directly, it unmounts the filesystem at that mountpoint.
Args:
device (str, optional): The device name to unmount. Defaults to None.
mountpoint (str, optional): The mountpoint to unmount. Defaults to None.
Raises:
subprocess.CalledProcessError: If the unmount command fails.
Logs:
Debug information about the unmounting process.
"""
if device:
self.logger.debug("Finding mountpoint of %s", device)
mountpoint = self.find_mountpoint(device)
if not mountpoint is None:
self.logger.debug(f"Unmounting {mountpoint}")
done = False
start_time = time.time()
timeout = 60
while not done and (time.time() - start_time) < timeout:
ret = subprocess.run(["/usr/bin/umount", mountpoint], check=False, capture_output=True, encoding='utf-8')
if ret.returncode == 0:
done=True
else:
if "target is busy" in ret.stderr:
self.logger.debug("Filesystem busy, waiting. %.1f seconds left", timeout - (time.time() - start_time))
time.sleep(0.1)
else:
raise subprocess.CalledProcessError(ret.returncode, ret.args, output=ret.stdout, stderr=ret.stderr)
# We've unmounted a new filesystem, update our filesystems list
self.update_mounts()
else:
self.logger.debug(f"{device} is not mounted")
def mount(self, device, mountpoint, filesystem = None):
"""
Mounts a device to a specified mountpoint.
Parameters:
device (str): The device to be mounted (e.g., '/dev/sda1').
mountpoint (str): The directory where the device will be mounted.
filesystem (str, optional): The type of filesystem to be used (e.g., 'ext4', 'ntfs'). Defaults to None.
Raises:
subprocess.CalledProcessError: If the mount command fails.
Logs:
Debug information about the mounting process, including the mount command, return code, stdout, and stderr.
Side Effects:
Creates the mountpoint directory if it does not exist.
Updates the internal list of mounted filesystems.
"""
self.logger.debug(f"Mounting {device} at {mountpoint}")
if not os.path.exists(mountpoint):
self.logger.debug(f"Creating directory {mountpoint}")
os.mkdir(mountpoint)
mount_cmd = ["/usr/bin/mount"]
if not filesystem is None:
mount_cmd = mount_cmd + ["-t", filesystem]
mount_cmd = mount_cmd + [device, mountpoint]
self.logger.debug(f"Mount command: {mount_cmd}")
result = subprocess.run(mount_cmd, check=True, capture_output = True)
        self.logger.debug(f"return code: {result.returncode}")
self.logger.debug(f"stdout: {result.stdout}")
self.logger.debug(f"stderr: {result.stderr}")
# We've mounted a new filesystem, update our filesystems list
self.update_mounts()
def ensure_mounted(self, device):
"""
Ensure that the given device is mounted.
This method attempts to mount the specified device to a path derived from
the base mount path and the device's basename. If the device is of type NTFS,
it uses the NTFSLibrary to handle the mounting process. For other filesystem
types, it uses a generic mount method.
Args:
device (str): The path to the device that needs to be mounted.
Returns:
str: The path where the device is mounted.
Logs:
- Info: When starting the mounting process.
- Debug: Various debug information including the mount path, filesystem type,
and success message.
Raises:
OSError: If there is an error creating the mount directory or mounting the device.
"""
self.logger.info("Mounting %s", device)
self.unmount(device = device)
path = os.path.join(self.base_mount_path, os.path.basename(device))
self.logger.debug(f"Will mount repo at {path}")
if not os.path.exists(path):
os.mkdir(path)
if self.filesystem_type(device) == "ntfs":
self.logger.debug("Handing a NTFS filesystem")
self._modprobe("ntfs3")
self.ntfsfix(device)
ntfs = NTFSLibrary(self.ntfs_implementation)
ntfs.mount_filesystem(device, path)
self.update_mounts()
else:
self.logger.debug("Handling a non-NTFS filesystem")
self.mount(device, path)
self.logger.debug("Successfully mounted at %s", path)
return path
def filesystem_type(self, device = None, mountpoint = None):
"""
Determine the filesystem type of a given device or mountpoint.
Args:
device (str, optional): The device to probe. If not provided, the device
will be determined based on the mountpoint.
mountpoint (str, optional): The mountpoint to find the device for. This
is used only if the device is not provided.
Returns:
str: The filesystem type of the device.
Raises:
KeyError: If the filesystem type cannot be determined from the probe.
Logs:
Debug: Logs the process of finding the device, probing the device, and
the determined filesystem type.
"""
if device is None:
self.logger.debug("Finding device for mountpoint %s", mountpoint)
device = self.find_device(mountpoint)
self.logger.debug(f"Probing {device}")
pr = blkid.Probe()
pr.set_device(device)
pr.enable_superblocks(True)
pr.set_superblocks_flags(blkid.SUBLKS_TYPE | blkid.SUBLKS_USAGE | blkid.SUBLKS_UUID | blkid.SUBLKS_UUIDRAW | blkid.SUBLKS_LABELRAW)
pr.do_safeprobe()
fstype = pr["TYPE"].decode('utf-8')
self.logger.debug(f"FS type is {fstype}")
return fstype
def is_filesystem(self, path):
"""
Check if the given path is a filesystem root.
Args:
path (str): The path to check.
Returns:
bool: True if the path is a filesystem root, False otherwise.
"""
# This is just an alias for better code readability
return self.is_mounted(mountpoint = path)
def create_filesystem(self, fs_type = None, fs_uuid = None, device = None):
"""
Create a filesystem on the specified device.
Parameters:
fs_type (str): The type of filesystem to create (e.g., 'ntfs', 'ext4', 'xfs', 'btrfs').
fs_uuid (str): The UUID to assign to the filesystem.
device (str): The device on which to create the filesystem (e.g., '/dev/sda1').
Raises:
RuntimeError: If the filesystem type is not recognized or if the filesystem creation command fails.
"""
self.logger.info(f"Creating filesystem {fs_type} with UUID {fs_uuid} in {device}")
if fs_type == "ntfs" or fs_type == "ntfs3":
self.logger.debug("Creating NTFS filesystem")
ntfs = NTFSLibrary(self.ntfs_implementation)
ntfs.create_filesystem(device, "NTFS")
ntfs.modify_uuid(device, fs_uuid)
else:
command = [f"/usr/sbin/mkfs.{fs_type}"]
command_args = []
if fs_type == "ext4" or fs_type == "ext3":
command_args = ["-U", fs_uuid, "-F", device]
elif fs_type == "xfs":
command_args = ["-m", f"uuid={fs_uuid}", "-f", device]
elif fs_type == "btrfs":
command_args = ["-U", fs_uuid, "-f", device]
else:
raise RuntimeError(f"Don't know how to create filesystem of type {fs_type}")
command = command + command_args
self.logger.debug(f"Creating Linux filesystem of type {fs_type} on {device}, command {command}")
result = subprocess.run(command, check = True, capture_output=True)
self.logger.debug(f"return code: {result.returncode}")
self.logger.debug(f"stdout: {result.stdout}")
self.logger.debug(f"stderr: {result.stderr}")
def mklostandfound(self, path):
"""
Recreate the lost+found if necessary.
When cloning at the root of a filesystem, cleaning the contents
removes the lost+found directory. This is a special directory that requires the use of
a tool to recreate it.
It may fail if the filesystem does not need it. We consider this harmless and ignore it.
The command is entirely skipped on NTFS, as mklost+found may malfunction if run on it,
and has no useful purpose.
"""
if self.is_filesystem(path):
if self.filesystem_type(mountpoint=path) == "ntfs":
self.logger.debug("Not running mklost+found on NTFS")
return
curdir = os.getcwd()
result = None
try:
self.logger.debug(f"Re-creating lost+found in {path}")
os.chdir(path)
result = subprocess.run(["/usr/sbin/mklost+found"], check=True, capture_output=True)
except subprocess.SubprocessError as e:
self.logger.warning(f"Error running mklost+found: {e}")
if result:
self.logger.debug(f"return code: {result.returncode}")
self.logger.debug(f"stdout: {result.stdout}")
self.logger.debug(f"stderr: {result.stderr}")
os.chdir(curdir)
def ntfsfix(self, device):
"""
Run the ntfsfix command on the specified device.
This method uses the ntfsfix utility to fix common NTFS problems on the given device.
This allows mounting an unclean NTFS filesystem.
Args:
device (str): The path to the device to be fixed.
Raises:
subprocess.CalledProcessError: If the ntfsfix command fails.
"""
self.logger.debug(f"Running ntfsfix on {device}")
subprocess.run(["/usr/bin/ntfsfix", "-d", device], check=True)
def unload_ntfs(self):
"""
Unloads the NTFS filesystem module.
This is a function added as a result of NTFS kernel module troubleshooting,
to try to ensure that NTFS code is only active as long as necessary.
The module is internally loaded as needed, so there's no load_ntfs function.
It may be removed in the future.
Raises:
RuntimeError: If the module cannot be removed.
"""
self._rmmod("ntfs3")
def find_boot_device(self):
"""
Searches for the EFI boot partition on the system.
This method scans the system's partitions to locate the EFI boot partition,
which is identified by the GUID "C12A7328-F81F-11D2-BA4B-00A0C93EC93B".
Returns:
str: The device node of the EFI partition if found, otherwise None.
Logs:
- Debug messages indicating the progress of the search.
- A warning message if the EFI partition is not found.
"""
disks = []
self.logger.debug("Looking for EFI partition")
with open("/proc/partitions", "r", encoding='utf-8') as partitions_file:
line_num=0
for line in partitions_file:
if line_num >=2:
data = line.split()
disk = data[3]
disks.append(disk)
self.logger.debug(f"Disk: {disk}")
line_num = line_num + 1
for disk in disks:
self.logger.debug("Loading partitions for disk %s", disk)
sfdisk_out = subprocess.run(["/usr/sbin/sfdisk", "-J", f"/dev/{disk}"], check=False, capture_output=True)
if sfdisk_out.returncode == 0:
disk_json_data = sfdisk_out.stdout
disk_data = json.loads(disk_json_data)
for part in disk_data["partitiontable"]["partitions"]:
self.logger.debug("Checking partition %s", part)
if part["type"] == "C12A7328-F81F-11D2-BA4B-00A0C93EC93B":
self.logger.debug("EFI partition found at %s", part["node"])
return part["node"]
else:
self.logger.debug("sfdisk returned with code %i, error %s", sfdisk_out.returncode, sfdisk_out.stderr)
self.logger.warning("Failed to find EFI partition!")
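The JSON structure that `find_boot_device` walks can be illustrated in isolation. The sample below is a fabricated, minimal `sfdisk -J` output (real output carries more fields); the second partition uses the standard GPT "Linux filesystem" type GUID:

```python
import json

# Fabricated minimal sample of `sfdisk -J` output, showing the structure
# find_boot_device() walks to locate the EFI system partition by its GUID.
EFI_GUID = "C12A7328-F81F-11D2-BA4B-00A0C93EC93B"
sample = """
{"partitiontable": {"label": "gpt", "partitions": [
    {"node": "/dev/sda1", "type": "C12A7328-F81F-11D2-BA4B-00A0C93EC93B"},
    {"node": "/dev/sda2", "type": "0FC63DAF-8483-4772-8E79-3D69D8477DE4"}
]}}
"""
partitions = json.loads(sample)["partitiontable"]["partitions"]
efi_node = next((p["node"] for p in partitions if p["type"] == EFI_GUID), None)
print(efi_node)  # → /dev/sda1
```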
def temp_unmount(self, mountpoint):
"""
Temporarily unmounts the filesystem at the given mountpoint.
This method finds the device associated with the specified mountpoint,
and returns the information to remount it with temp_remount.
The purpose of this function is to temporarily unmount a filesystem for
actions like fsck, and to mount it back afterwards.
Args:
mountpoint (str): The mountpoint of the filesystem to unmount.
Returns:
dict: A dictionary containing the information needed to remount the filesystem.
"""
device = self.find_device(mountpoint)
fs = self.filesystem_type(mountpoint = mountpoint)
data = {"mountpoint" : mountpoint, "device" :device, "filesystem" : fs}
self.logger.debug("Temporarily unmounting device %s, mounted on %s, fs type %s", device, mountpoint, fs)
self.unmount(mountpoint = mountpoint)
return data
def temp_remount(self, unmount_data):
"""
Remounts a filesystem unmounted with temp_unmount
This method remounts a filesystem using the data provided by temp_unmount
Args:
unmount_data (dict): A dictionary containing the data needed to remount the filesystem.
Returns:
None
"""
self.logger.debug("Remounting temporarily unmounted device %s on %s, fs type %s", unmount_data["device"], unmount_data["mountpoint"], unmount_data["filesystem"])
self.mount(device = unmount_data["device"], mountpoint=unmount_data["mountpoint"], filesystem=unmount_data["filesystem"])


@@ -1,52 +0,0 @@
#!/usr/bin/env python3
import unittest
import logging
import os
import sys
import urllib.request
import tarfile
import subprocess
from shutil import rmtree
from pathlib import Path
parent_dir = str(Path(__file__).parent.parent.absolute())
sys.path.append(parent_dir)
sys.path.append("/opengnsys/installer")
print(parent_dir)
from gitlib import OpengnsysGitLibrary
class GitTests(unittest.TestCase):
def setUp(self):
self.logger = logging.getLogger("OpengnsysTest")
self.oggit = OpengnsysGitLibrary()
self.logger.info("setUp()")
if not hasattr(self, 'init_complete'):
self.init_complete = True
def test_init(self):
self.assertIsNotNone(self.oggit)
def test_acls(self):
self.oggit.ogCreateAcl()
def test_sync_local(self):
# self.oggit.ogSyncLocalGitImage()
pass
if __name__ == '__main__':
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(name)20s - [%(levelname)5s] - %(message)s')
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
logger.info("Program start")
unittest.main()

File diff suppressed because it is too large


@@ -1,22 +0,0 @@
def parse_kernel_cmdline():
"""Parse the kernel arguments to obtain configuration parameters in Oglive
OpenGnsys passes data in the kernel arguments, for example:
[...] group=Aula_virtual ogrepo=192.168.2.1 oglive=192.168.2.1 [...]
Returns:
dict: Dict of configuration parameters and their values.
"""
params = {}
with open("/proc/cmdline", encoding='utf-8') as cmdline:
line = cmdline.readline()
parts = line.split()
for part in parts:
if "=" in part:
# Split only on the first "=", so values may themselves contain "="
key, value = part.split("=", 1)
params[key] = value
return params
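The parsing logic above can be exercised on a literal command line instead of `/proc/cmdline`; a minimal sketch, with sample values mirroring the docstring example:

```python
# Sample kernel command line mirroring the docstring example above.
line = "boot=live quiet group=Aula_virtual ogrepo=192.168.2.1 oglive=192.168.2.1"

params = {}
for part in line.split():
    if "=" in part:
        # Split only on the first "=", so values may themselves contain "=".
        key, value = part.split("=", 1)
        params[key] = value

print(params["group"], params["ogrepo"])  # → Aula_virtual 192.168.2.1
```

Bare flags such as `quiet` carry no `=` and are simply skipped.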


@@ -1,111 +0,0 @@
import logging
import subprocess
from enum import Enum
class NTFSImplementation(Enum):
KERNEL = 1
NTFS3G = 2
class NTFSLibrary:
"""
A library for managing NTFS filesystems.
Attributes:
logger (logging.Logger): Logger for the class.
implementation (NTFSImplementation): The implementation to use for mounting NTFS filesystems.
"""
def __init__(self, implementation):
"""
Initializes the instance with the given implementation.
Args:
implementation: The implementation to be used by the instance.
Attributes:
logger (logging.Logger): Logger instance for the class, set to debug level.
implementation: The implementation provided during initialization.
"""
self.logger = logging.getLogger("NTFSLibrary")
self.logger.setLevel(logging.DEBUG)
self.implementation = implementation
self.logger.debug("Initializing")
def create_filesystem(self, device, label):
"""
Creates an NTFS filesystem on the specified device with the given label.
Args:
device (str): The device path where the NTFS filesystem will be created.
label (str): The label to assign to the NTFS filesystem.
Returns:
None
Logs:
Logs the creation process with the device and label information.
"""
self.logger.info(f"Creating NTFS in {device} with label {label}")
subprocess.run(["/usr/sbin/mkntfs", device, "-Q", "-L", label], check=True)
def mount_filesystem(self, device, mountpoint):
"""
Mounts a filesystem on the specified mountpoint using the specified NTFS implementation.
Args:
device (str): The device path to be mounted (e.g., '/dev/sda1').
mountpoint (str): The directory where the device will be mounted.
Raises:
ValueError: If the NTFS implementation is unknown.
"""
self.logger.info(f"Mounting {device} in {mountpoint} using implementation {self.implementation}")
if self.implementation == NTFSImplementation.KERNEL:
subprocess.run(["/usr/bin/mount", "-t", "ntfs3", device, mountpoint], check = True)
elif self.implementation == NTFSImplementation.NTFS3G:
subprocess.run(["/usr/bin/ntfs-3g", device, mountpoint], check = True)
else:
raise ValueError(f"Unknown NTFS implementation: {self.implementation}")
def modify_uuid(self, device, uuid):
"""
Modify the UUID of an NTFS device.
This function changes the UUID of the specified NTFS device to the given UUID.
It reads the current UUID from the device, logs the change, and writes the new UUID.
Args:
device (str): The path to the NTFS device file.
uuid (str): The new UUID to be set, in hexadecimal string format.
Raises:
IOError: If there is an error opening or writing to the device file.
"""
ntfs_uuid_offset = 0x48
ntfs_uuid_length = 8
binary_uuid = bytearray.fromhex(uuid)
binary_uuid.reverse()
self.logger.info(f"Changing UUID on {device} to {uuid}")
with open(device, 'r+b') as ntfs_dev:
self.logger.debug("Reading %i bytes from offset %i", ntfs_uuid_length, ntfs_uuid_offset)
ntfs_dev.seek(ntfs_uuid_offset)
prev_uuid = bytearray(ntfs_dev.read(ntfs_uuid_length))
prev_uuid.reverse()
prev_uuid_hex = bytearray.hex(prev_uuid)
self.logger.debug(f"Previous UUID: {prev_uuid_hex}")
self.logger.debug("Writing...")
ntfs_dev.seek(ntfs_uuid_offset)
ntfs_dev.write(binary_uuid)
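The byte-order handling in `modify_uuid` can be checked in isolation. NTFS stores the volume serial little-endian at offset 0x48, so the human-readable hex string is reversed byte-wise before being written; a minimal sketch (the serial value is illustrative):

```python
# Illustrative volume serial; modify_uuid() receives it as a hex string.
uuid_hex = "0123456789abcdef"

# Convert to bytes and reverse, matching the little-endian on-disk layout.
binary_uuid = bytearray.fromhex(uuid_hex)
binary_uuid.reverse()
print(binary_uuid.hex())  # → efcdab8967452301
```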


@@ -1,11 +0,0 @@
gitdb==4.0.11
GitPython==3.1.43
libarchive-c==5.1
nose==1.3.7
pathlib==1.0.1
pkg_resources==0.0.0
pylibacl==0.7.0
pylibblkid==0.3
pyxattr==0.8.1
smmap==5.0.1
tqdm==4.66.5


@@ -1,32 +1,59 @@
-# Installing Dependencies for Python
-Converting the code to Python 3 currently requires the packages specified in `requirements.txt`.
-To install Python dependencies, the `venv` module (https://docs.python.org/3/library/venv.html) is used, which installs all dependencies in an isolated environment separate from the system.
-sudo apt install python3-git opengnsys-libarchive-c python3-termcolor bsdextrautils
-## Add SSH Keys to oglive
-The Git system accesses the ogrepository via SSH. To work, it needs the oglive to have an SSH key, and the ogrepository must accept it.
-The Git installer can make the required changes with:
-./opengnsys_git_installer.py --set-ssh-key
-Or to do it for a specific oglive:
-./opengnsys_git_installer.py --set-ssh-key --oglive 1 # oglive number
-Running this command automatically adds the SSH key to Forgejo.
-The existing key can be extracted with:
-./opengnsys_git_installer.py --extract-ssh-key --quiet
+# Git component installer
+This directory contains the installer for the git component for OpenGnsys.
+It downloads, installs and configures Forgejo, creates the default repositories and configures SSH keys.
+# Quick Installation
+## Ubuntu 24.04
+### Add the repository
+Create the file `/etc/apt/sources.list.d/opengnsys.sources` with these contents:
+Types: deb
+URIs: https://ognproject.evlt.uma.es/debian-opengnsys/
+Suites: noble
+Components: main
+Signed-By:
+-----BEGIN PGP PUBLIC KEY BLOCK-----
+.
+mDMEZzx/SxYJKwYBBAHaRw8BAQdAa83CuAJ5/+7Pn9LHT/k34EAGpx5FnT/ExHSj
+XZG1JES0Ik9wZW5HbnN5cyA8b3Blbmduc3lzQG9wZW5nbnN5cy5lcz6ImQQTFgoA
+QRYhBC+J38Xsso227ZbDVt2S5xJQRhKDBQJnPH9LAhsDBQkFo5qABQsJCAcCAiIC
+BhUKCQgLAgQWAgMBAh4HAheAAAoJEN2S5xJQRhKDW/MBAO6swnpwdrbm48ypMyPh
+NboxvF7rCqBqHWwRHvkvrq7pAP9zd98r7z2AvqVXZxnaCsLTUNMEL12+DVZAUZ1G
+EquRBbg4BGc8f0sSCisGAQQBl1UBBQEBB0B6D6tkrwXSHi7ebGYsiMPntqwdkQ/S
+84SFTlSxRqdXfgMBCAeIfgQYFgoAJhYhBC+J38Xsso227ZbDVt2S5xJQRhKDBQJn
+PH9LAhsMBQkFo5qAAAoJEN2S5xJQRhKDJ+cBAM9jYbeq5VXkHLfODeVztgSXnSUe
+yklJ18oQmpeK5eWeAQDKYk/P0R+1ZJDItxkeP6pw62bCDYGQDvdDGPMAaIT6CA==
+=xcNc
+-----END PGP PUBLIC KEY BLOCK-----
+It's required to run `apt update` after creating this file.
+### Install packages
+sudo apt install -y python3-git opengnsys-libarchive-c python3-termcolor python3-requests python3-tqdm bsdextrautils
+## Adding SSH Keys to oglive
+The Git system accesses the ogrepository via SSH. To function, it needs the oglive to have an SSH key, and for the ogrepository to accept it.
+The Git installer can make the required changes by extracting an SSH key from an oglive and installing it in Forgejo. If there is a local ogboot installation, the installer will do this automatically. If there is not, it is necessary to provide the installer with an oglive from which to extract the key using the `--oglive-file` or `--oglive-url` parameter.
+For example:
+./opengnsys_git_installer.py --oglive-url https://example.com/ogLive-noble.iso
+The installer will proceed to download the file, mount the ISO, and extract the key.
+To perform the process after completing the installation and only add a key to an existing installation, use the `--set-ssh-key` parameter:
+./opengnsys_git_installer.py --set-ssh-key --oglive-url https://example.com/ogLive-noble.iso
 # Running the Installer


@@ -1,35 +1,58 @@
-# Instalación de dependencias para python
-La conversion del código a Python 3 requiere actualmente los paquetes especificados en `requirements.txt`
-Para instalar dependencias de python se usa el modulo venv (https://docs.python.org/3/library/venv.html) que instala todas las dependencias en un entorno independiente del sistema.
-sudo apt install python3-git opengnsys-libarchive-c python3-termcolor bsdextrautils
-El instalador de Git puede realizar los cambios requeridos, con:
-./opengnsys_git_installer.py --set-ssh-key
-O para hacerlo contra un oglive especifico:
-./opengnsys_git_installer.py --set-ssh-key --oglive 1 # numero de oglive
-Ejecutar este comando agrega la clave de SSH a Forgejo automáticamente.
-La clave existente puede extraerse con:
-./opengnsys_git_installer.py --extract-ssh-key --quiet
+# Instalador de componente Git
+Este directorio contiene el instalador de Git para OpenGnsys.
+Descarga, instala y configura Forgejo, crea los repositorios por defecto, y configura claves de SSH.
+# Instalación rápida
+## Ubuntu 24.04
+### Agregar repositorio
+Crear el archivo `/etc/apt/sources.list.d/opengnsys.sources` con este contenido:
+Types: deb
+URIs: https://ognproject.evlt.uma.es/debian-opengnsys/opengnsys
+Suites: noble
+Components: main
+Signed-By:
+-----BEGIN PGP PUBLIC KEY BLOCK-----
+.
+mDMEZzx/SxYJKwYBBAHaRw8BAQdAa83CuAJ5/+7Pn9LHT/k34EAGpx5FnT/ExHSj
+XZG1JES0Ik9wZW5HbnN5cyA8b3Blbmduc3lzQG9wZW5nbnN5cy5lcz6ImQQTFgoA
+QRYhBC+J38Xsso227ZbDVt2S5xJQRhKDBQJnPH9LAhsDBQkFo5qABQsJCAcCAiIC
+BhUKCQgLAgQWAgMBAh4HAheAAAoJEN2S5xJQRhKDW/MBAO6swnpwdrbm48ypMyPh
+NboxvF7rCqBqHWwRHvkvrq7pAP9zd98r7z2AvqVXZxnaCsLTUNMEL12+DVZAUZ1G
+EquRBbg4BGc8f0sSCisGAQQBl1UBBQEBB0B6D6tkrwXSHi7ebGYsiMPntqwdkQ/S
+84SFTlSxRqdXfgMBCAeIfgQYFgoAJhYhBC+J38Xsso227ZbDVt2S5xJQRhKDBQJn
+PH9LAhsMBQkFo5qAAAoJEN2S5xJQRhKDJ+cBAM9jYbeq5VXkHLfODeVztgSXnSUe
+yklJ18oQmpeK5eWeAQDKYk/P0R+1ZJDItxkeP6pw62bCDYGQDvdDGPMAaIT6CA==
+=xcNc
+-----END PGP PUBLIC KEY BLOCK-----
+Es necesario ejecutar `apt update` después de crear el archivo.
+### Instalar paquetes:
+sudo apt install -y python3-git opengnsys-libarchive-c python3-termcolor python3-requests python3-tqdm bsdextrautils
+## Agregar claves de SSH a oglive
+El sistema de Git accede al ogrepository por SSH. Para funcionar, necesita que el oglive tenga una clave de SSH, y que el ogrepository la acepte.
+El instalador de Git puede realizar los cambios requeridos, extrayendo una clave de SSH de un oglive e instalándola en Forgejo. Si hay una instalación de ogboot local, el instalador lo hará automáticamente. Si no la hay, es necesario darle al instalador un oglive del que extraer la clave con el parámetro `--oglive-file` o `--oglive-url`.
+Por ejemplo:
+./opengnsys_git_installer.py --oglive-url https://example.com/ogLive-noble.iso
+El instalador procederá a descargar el archivo, montar el ISO, y extraer la clave.
+Para hacer el proceso después de haber completado la instalación y solo agregar una clave a una instalación existente, usar el parámetro `--set-ssh-key`:
+./opengnsys_git_installer.py --set-ssh-key --oglive-url https://example.com/ogLive-noble.iso
# Ejecutar
@@ -49,6 +72,8 @@ El usuario por defecto es `oggit` con password `opengnsys`.
El sistema OgGit requiere módulos de Python que no vienen en Ubuntu 24.04 o tienen versiones demasiado antiguas.
Los paquetes se pueden obtener desde el repositorio de OpenGnsys (ver arriba).
Los fuentes de los paquetes se encuentran en oggit/packages.
# Documentación de código fuente


@@ -0,0 +1,656 @@
opengnsys-gitinstaller (0.5dev3) UNRELEASED; urgency=medium
[ OpenGnsys ]
* Initial release.
[ Vadim Troshchinskiy ]
* First commit
* Add installer
* Add requirements file
[ lgromero ]
* refs #734 Creates first skeleton of symfony+swagger project
[ Vadim Troshchinskiy ]
* Add Gitlib
[ lgromero ]
* refs #734 Changes OgBootBundle name and adds a first endpoint to test
* refs #734 Adds template of repository and branch endpoints
[ Vadim Troshchinskiy ]
* Update docs to account for changes
* Trivial API server
* Ticket #753: Add repository listing
* Ticket #735: List branches in repo
* Add testing instructions
* Agregar manejo de errrores
* Ticket #741: Crear repo Ticket #736: Eliminar repo
[ lgromero ]
* refs #734 Adds README for Api installation
* refs #734 Control of errores and http codes in controler
* refs #734 Renemas oggitservice
[ Vadim Troshchinskiy ]
* Ticket #738, ticket #739: repo and sync backup protoype
[ lgromero ]
* refs #734 Adds new endpoints sync and backup and status endpoint
* refs #734 Adds nelmio api doc configuration
* Adds .env file to root
* refs #734 use environment variables in .env files and disable web depuration toolbar
* refs #734 fix typo in .env and use oggit_url environment variable
[ Vadim Troshchinskiy ]
* Ticket #738, ticket #739: git sync and backup
[ Nicolas Arenas ]
* Add docker container files
[ Vadim Troshchinskiy ]
* Ticket #737: GC
* Use Paramiko and Gitpython for backups
[ Nicolas Arenas ]
* Add mock api for testing dockerfile
[ Vadim Troshchinskiy ]
* Ticket #740, listen on all hosts
[ lgromero ]
* refs #734 Removes innecesaries parameters and changes php platform to 8.2
* refs #734 just changes name and description in swagger web page
[ Vadim Troshchinskiy ]
* Remove duplicated import
* Documentation prototype
* Update to 24.04, solves deployment issue
* Add more documentation
* Add API README
* Add API examples
* Update list of package requirements in oglive
* Fix commandline parsing bug
* Revert experimental Windows change
* Fix ticket #770: Re-parse filesystems list after mounting
* Use oglive server if ogrepository is not set
* Ticket #770: Add sanity check
* Ticket #771: Correctly create directories on metadata restoration
* Ticket #780: Unmount before clone if needed
* Fix ticket #800: sudo doesn't work
[ Vadim Trochinsky ]
* Fix ticket #802: .git directory in filesystem root
[ Vadim Troshchinskiy ]
* Fix ticket #805: Remove .git directory if it already exists when checking out
* Ticket #770: Correctly update metadata when mounting and unmounting
* Ticket #804: Move log
* Fix ticket #902: .git directories can't be checked out
* Lint fixes
* Remove unused code
* Lint fixes
* Lint fixes
* Lint fixes
* Additional logging message
* Lint fix
* Fix ticket #907: mknod fails due to path not found
* Initial implementation for commit, push, fetch.
* Don't fail on empty lines in metadata, just skip them
* Add documentation and functionality to progress hook (not used yet)
* Pylint fixes
* Ticket #908: Remove some unneeded warnings
* Fix progress report
* Ticket #906: Fix permissions on directories
* Make pylint happy
* Mount fix
* Ticket #808: Initial implementation
* Initial forgejo install
* Deduplicate key extraction
* Fix installer bugs and add documentation
* Change user to oggit
* Fix NTFS ID modification implementation
* Implement system-specific EFI data support
* Fix encoding when reading system uuid
* Fix and refactor slightly EFI implementation
* Add Windows BCD decoding tool
* Check module loading and unloading, modprobe works on oglive now
* Make EFI deployment more flexible
* Add organization API call
* Fix bash library path
* Fix repo paths for forgejo
* Update documentation
* Sync to ensure everything is written
* Refactoring and more pydoc
* Add more documentation
* Improve installer documentation
* Improve gitlib instructions
* Add missing files
* Partial setsshkey implementation
* Fix SSH key generation and extraction
* Initial package contents
* Add Debian packaging
* Add pylkid
* Add pyblkid debian files
* Use packaged pyblkid
* More detailed API logging
* Improve logging
* Add oglive key to forgejo
* Add original source
* Always re-download forgejo, even if installed.
* Remove obsolete code that stopped being relevant with Forgejo
* Move python modules to /opt/opengnsys-modules
* Use absolute paths in initrd modification
* Add timestamp to ssh key title, forgejo doesn't like duplicates
* Skip past symlinks and problems in oglive modification
* Get keys from squashfs instead of initrd to work with current oglive packaging
* Fix trivial bug
* Move modules to /usr/share/opengnsys
* Move packages to /usr/share
[ Angel Rodriguez ]
* Add gitlib/README-en.md
* Add api/README-en.md
* Add installer/README-en.md
[ Vadim Troshchinskiy ]
* Skip NTFS code on non-Windows
* Store and restore GPT partition UUIDs
* Update READMEs
* BCD constants
* Use tqdm
* Constants
* Add extra mounts update
* Better status reports
* Make log filename machine-dependent Move kernel args parsing
* Make unmounting more robust
* Improve repository initialization
* Make --pull work like the other commands
* Add packages
* Update documentation
* Ignore python cache
* Ignore more files
* add python libarchive-c original package
* Add pyblkid copyright file
* Add make_orig script
* Reorder and fix for ogrepository reorganization
* Restructure git installer to work without ogboot on the same machine, update docs
* Update english documentation
* Improve installation process, make it possible to extract keys from oglive
* Fix namespaces
* Fix ogrepository paths
* Change git repo path
* Improvements for logging and error handling
* Fix HTTP exception handling
* Improve task management, cleanup when there are too many
* More error logging
* Mark git repo as a safe directory
* Rework the ability to use a custom SSH key
* Log every request
* Branch deletion
* Make branch deletion RESTful
* Initial version of the API server
* Add original repo_api
* Convert to blueprint
* Add port argument
* Fix error handling
* Add README
* Load swagger from disk
* Fix repository URL
* Bump forgejo version
* Add helpful script
* Fix port argument
* Refactoring for package support
* Remove old code
* Refactoring for packaging
* opengnsys-forgejo package
* Fix post-install for forgejo deployment
* Fixes for running under gunicorn
* Debian packaging
* Add branches and tags creation endpoints
* Add missing file
* Rename service
* Add templates
* Disable tests
* Fix permission problem
* Fix ini path
* Update changelog
* Update changelog
* Add package files
* Add git image creation script
* Slightly improve API for ogrepo usability
* First commit
* Add installer
* Add requirements file
[ lgromero ]
* refs #734 Creates first skeleton of symfony+swagger project
[ Vadim Troshchinskiy ]
* Add Gitlib
[ lgromero ]
* refs #734 Changes OgBootBundle name and adds a first endpoint to test
* refs #734 Adds template of repository and branch endpoints
[ Vadim Troshchinskiy ]
* Update docs to account for changes
* Trivial API server
* Ticket #753: Add repository listing
* Ticket #735: List branches in repo
* Add testing instructions
* Agregar manejo de errrores
* Ticket #741: Crear repo Ticket #736: Eliminar repo
[ lgromero ]
* refs #734 Adds README for Api installation
* refs #734 Control of errores and http codes in controler
* refs #734 Renemas oggitservice
[ Vadim Troshchinskiy ]
* Ticket #738, ticket #739: repo and sync backup protoype
[ lgromero ]
* refs #734 Adds new endpoints sync and backup and status endpoint
* refs #734 Adds nelmio api doc configuration
* Adds .env file to root
* refs #734 use environment variables in .env files and disable web depuration toolbar
* refs #734 fix typo in .env and use oggit_url environment variable
[ Vadim Troshchinskiy ]
* Ticket #738, ticket #739: git sync and backup
[ Nicolas Arenas ]
* Add docker container files
[ Vadim Troshchinskiy ]
* Ticket #737: GC
* Use Paramiko and Gitpython for backups
[ Nicolas Arenas ]
* Add mock api for testing dockerfile
[ Vadim Troshchinskiy ]
* Ticket #740, listen on all hosts
[ lgromero ]
* refs #734 Removes innecesaries parameters and changes php platform to 8.2
* refs #734 just changes name and description in swagger web page
[ Vadim Troshchinskiy ]
* Remove duplicated import
* Documentation prototype
* Update to 24.04, solves deployment issue
* Add more documentation
* Add API README
* Add API examples
* Update list of package requirements in oglive
* Fix commandline parsing bug
* Revert experimental Windows change
* Fix ticket #770: Re-parse filesystems list after mounting
* Use oglive server if ogrepository is not set
* Ticket #770: Add sanity check
* Ticket #771: Correctly create directories on metadata restoration
* Ticket #780: Unmount before clone if needed
* Fix ticket #800: sudo doesn't work
[ Vadim Trochinsky ]
* Fix ticket #802: .git directory in filesystem root
[ Vadim Troshchinskiy ]
* Fix ticket #805: Remove .git directory if it already exists when checking out
* Ticket #770: Correctly update metadata when mounting and unmounting
* Ticket #804: Move log
* Fix ticket #902: .git directories can't be checked out
* Lint fixes
* Remove unused code
* Lint fixes
* Lint fixes
* Lint fixes
* Additional logging message
* Lint fix
* Fix ticket #907: mknod fails due to path not found
* Initial implementation for commit, push, fetch.
* Don't fail on empty lines in metadata, just skip them
* Add documentation and functionality to progress hook (not used yet)
* Pylint fixes
* Ticket #908: Remove some unneeded warnings
* Fix progress report
* Ticket #906: Fix permissions on directories
* Make pylint happy
* Mount fix
* Ticket #808: Initial implementation
* Initial forgejo install
* Deduplicate key extraction
* Fix installer bugs and add documentation
* Change user to oggit
* Fix NTFS ID modification implementation
* Implement system-specific EFI data support
* Fix encoding when reading system uuid
* Fix and refactor slightly EFI implementation
* Add Windows BCD decoding tool
* Check module loading and unloading, modprobe works on oglive now
* Make EFI deployment more flexible
* Add organization API call
* Fix bash library path
* Fix repo paths for forgejo
* Update documentation
* Sync to ensure everything is written
* Refactoring and more pydoc
* Add more documentation
* Improve installer documentation
* Improve gitlib instructions
* Add missing files
* Partial setsshkey implementation
* Fix SSH key generation and extraction
* Initial package contents
* Add Debian packaging
* Add pylkid
* Add pyblkid debian files
* Use packaged pyblkid
* More detailed API logging
* Improve logging
* Add oglive key to forgejo
* Add original source
* Always re-download forgejo, even if installed.
* Remove obsolete code that stopped being relevant with Forgejo
* Move python modules to /opt/opengnsys-modules
* Use absolute paths in initrd modification
* Add timestamp to ssh key title, forgejo doesn't like duplicates
* Skip past symlinks and problems in oglive modification
* Get keys from squashfs instead of initrd to work with current oglive packaging
* Fix trivial bug
* Move modules to /usr/share/opengnsys
* Move packages to /usr/share
[ Angel Rodriguez ]
* Add gitlib/README-en.md
* Add api/README-en.md
* Add installer/README-en.md
[ Vadim Troshchinskiy ]
* Skip NTFS code on non-Windows
* Store and restore GPT partition UUIDs
* Update READMEs
* BCD constants
* Use tqdm
* Constants
* Add extra mounts update
* Better status reports
* Make log filename machine-dependent Move kernel args parsing
* Make unmounting more robust
* Improve repository initialization
* Make --pull work like the other commands
* Add packages
* Update documentation
* Ignore python cache
* Ignore more files
* add python libarchive-c original package
* Add pyblkid copyright file
* Add make_orig script
* Reorder and fix for ogrepository reorganization
* Restructure git installer to work without ogboot on the same machine, update docs
* Update english documentation
* Improve installation process, make it possible to extract keys from oglive
* Fix namespaces
* Fix ogrepository paths
* Change git repo path
* Improvements for logging and error handling
* Fix HTTP exception handling
* Improve task management, cleanup when there are too many
* More error logging
* Mark git repo as a safe directory
* Rework the ability to use a custom SSH key
* Log every request
* Branch deletion
* Make branch deletion RESTful
* Initial version of the API server
* Add original repo_api
* Convert to blueprint
* Add port argument
* Fix error handling
* Add README
* Load swagger from disk
* Fix repository URL
* Bump forgejo version
* Add helpful script
* Fix port argument
* Refactoring for package support
* Remove old code
* Refactoring for packaging
* opengnsys-forgejo package
* Fix post-install for forgejo deployment
* Fixes for running under gunicorn
* Debian packaging
* Add branches and tags creation endpoints
* Add missing file
* Rename service
* Add templates
* Disable tests
* Fix permission problem
* Fix ini path
* Update changelog
* Update changelog
* Add package files
* Add git image creation script
* Slightly improve API for ogrepo usability
* Update changelog
* First commit
* Add installer
* Add requirements file
[ lgromero ]
* refs #734 Creates first skeleton of symfony+swagger project
[ Vadim Troshchinskiy ]
* Add Gitlib
[ lgromero ]
* refs #734 Changes OgBootBundle name and adds a first endpoint to test
* refs #734 Adds template of repository and branch endpoints
[ Vadim Troshchinskiy ]
* Update docs to account for changes
* Trivial API server
* Ticket #753: Add repository listing
* Ticket #735: List branches in repo
* Add testing instructions
* Agregar manejo de errrores
* Ticket #741: Crear repo Ticket #736: Eliminar repo
[ lgromero ]
* refs #734 Adds README for Api installation
* refs #734 Control of errores and http codes in controler
* refs #734 Renemas oggitservice
[ Vadim Troshchinskiy ]
* Ticket #738, ticket #739: repo and sync backup protoype
[ lgromero ]
* refs #734 Adds new endpoints sync and backup and status endpoint
* refs #734 Adds nelmio api doc configuration
* Adds .env file to root
* refs #734 use environment variables in .env files and disable web depuration toolbar
* refs #734 fix typo in .env and use oggit_url environment variable
[ Vadim Troshchinskiy ]
* Ticket #738, ticket #739: git sync and backup
[ Nicolas Arenas ]
* Add docker container files
[ Vadim Troshchinskiy ]
* Ticket #737: GC
* Use Paramiko and Gitpython for backups
[ Nicolas Arenas ]
* Add mock api for testing dockerfile
[ Vadim Troshchinskiy ]
* Ticket #740, listen on all hosts
[ lgromero ]
* refs #734 Removes innecesaries parameters and changes php platform to 8.2
* refs #734 just changes name and description in swagger web page
[ Vadim Troshchinskiy ]
* Remove duplicated import
* Documentation prototype
* Update to 24.04, solves deployment issue
* Add more documentation
* Add API README
* Add API examples
* Update list of package requirements in oglive
* Fix commandline parsing bug
* Revert experimental Windows change
* Fix ticket #770: Re-parse filesystems list after mounting
* Use oglive server if ogrepository is not set
* Ticket #770: Add sanity check
* Ticket #771: Correctly create directories on metadata restoration
* Ticket #780: Unmount before clone if needed
* Fix ticket #800: sudo doesn't work
[ Vadim Trochinsky ]
* Fix ticket #802: .git directory in filesystem root
[ Vadim Troshchinskiy ]
* Fix ticket #805: Remove .git directory if it already exists when checking out
* Ticket #770: Correctly update metadata when mounting and unmounting
* Ticket #804: Move log
* Fix ticket #902: .git directories can't be checked out
* Lint fixes
* Remove unused code
* Lint fixes
* Lint fixes
* Lint fixes
* Additional logging message
* Lint fix
* Fix ticket #907: mknod fails due to path not found
* Initial implementation for commit, push, fetch.
* Don't fail on empty lines in metadata, just skip them
* Add documentation and functionality to progress hook (not used yet)
* Pylint fixes
* Ticket #908: Remove some unneeded warnings
* Fix progress report
* Ticket #906: Fix permissions on directories
* Make pylint happy
* Mount fix
* Ticket #808: Initial implementation
* Initial forgejo install
* Deduplicate key extraction
* Fix installer bugs and add documentation
* Change user to oggit
* Fix NTFS ID modification implementation
* Implement system-specific EFI data support
* Fix encoding when reading system uuid
* Fix and refactor slightly EFI implementation
* Add Windows BCD decoding tool
* Check module loading and unloading, modprobe works on oglive now
* Make EFI deployment more flexible
* Add organization API call
* Fix bash library path
* Fix repo paths for forgejo
* Update documentation
* Sync to ensure everything is written
* Refactoring and more pydoc
* Add more documentation
* Improve installer documentation
* Improve gitlib instructions
* Add missing files
* Partial setsshkey implementation
* Fix SSH key generation and extraction
* Initial package contents
* Add Debian packaging
* Add pyblkid
* Add pyblkid debian files
* Use packaged pyblkid
* More detailed API logging
* Improve logging
* Add oglive key to forgejo
* Add original source
* Always re-download forgejo, even if installed.
* Remove obsolete code that stopped being relevant with Forgejo
* Move python modules to /opt/opengnsys-modules
* Use absolute paths in initrd modification
* Add timestamp to ssh key title, forgejo doesn't like duplicates
* Skip past symlinks and problems in oglive modification
* Get keys from squashfs instead of initrd to work with current oglive packaging
* Fix trivial bug
* Move modules to /usr/share/opengnsys
* Move packages to /usr/share
[ Angel Rodriguez ]
* Add gitlib/README-en.md
* Add api/README-en.md
* Add installer/README-en.md
[ Vadim Troshchinskiy ]
* Skip NTFS code on non-Windows
* Store and restore GPT partition UUIDs
* Update READMEs
* BCD constants
* Use tqdm
* Constants
* Add extra mounts update
* Better status reports
* Make log filename machine-dependent; move kernel args parsing
* Make unmounting more robust
* Improve repository initialization
* Make --pull work like the other commands
* Add packages
* Update documentation
* Ignore python cache
* Ignore more files
* Add python libarchive-c original package
* Add pyblkid copyright file
* Add make_orig script
* Reorder and fix for ogrepository reorganization
* Restructure git installer to work without ogboot on the same machine, update docs
* Update english documentation
* Improve installation process, make it possible to extract keys from oglive
* Fix namespaces
* Fix ogrepository paths
* Change git repo path
* Improvements for logging and error handling
* Fix HTTP exception handling
* Improve task management, cleanup when there are too many
* More error logging
* Mark git repo as a safe directory
* Rework the ability to use a custom SSH key
* Log every request
* Branch deletion
* Make branch deletion RESTful
* Initial version of the API server
* Add original repo_api
* Convert to blueprint
* Add port argument
* Fix error handling
* Add README
* Load swagger from disk
* Fix repository URL
* Bump forgejo version
* Add helpful script
* Fix port argument
* Refactoring for package support
* Remove old code
* Refactoring for packaging
* opengnsys-forgejo package
* Fix post-install for forgejo deployment
* Fixes for running under gunicorn
* Debian packaging
* Add branches and tags creation endpoints
* Add missing file
* Rename service
* Add templates
* Disable tests
* Fix permission problem
* Fix ini path
* Update changelog
* Update changelog
* Add package files
* Add git image creation script
* Slightly improve API for ogrepo usability
* Update changelog
* Update changelog
-- OpenGnsys <opengnsys@opengnsys.com> Mon, 16 Jun 2025 21:23:34 +0000


@ -0,0 +1,29 @@
Source: opengnsys-gitinstaller
Section: unknown
Priority: optional
Maintainer: OpenGnsys <opengnsys@opengnsys.es>
Rules-Requires-Root: no
Build-Depends:
debhelper-compat (= 13),
Standards-Version: 4.6.2
Homepage: https://opengnsys.es
#Vcs-Browser: https://salsa.debian.org/debian/ogboot
#Vcs-Git: https://salsa.debian.org/debian/ogboot.git
Package: opengnsys-gitinstaller
Architecture: any
Multi-Arch: foreign
Depends:
${shlibs:Depends},
${misc:Depends},
bsdextrautils,
debconf (>= 1.5.0),
opengnsys-libarchive-c,
python3,
python3-aniso8601,
python3-git,
python3-termcolor,
python3-tqdm
Conflicts:
Description: Opengnsys installer library for OgGit
Files for OpenGnsys Git support


@ -0,0 +1,43 @@
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Source: <url://example.com>
Upstream-Name: ogboot
Upstream-Contact: <preferred name and address to reach the upstream project>
Files:
*
Copyright:
<years> <put author's name and email here>
<years> <likewise for another author>
License: GPL-3.0+
Files:
debian/*
Copyright:
2025 vagrant <vagrant@build>
License: GPL-3.0+
License: GPL-3.0+
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
.
This package is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Comment:
On Debian systems, the complete text of the GNU General
Public License version 3 can be found in "/usr/share/common-licenses/GPL-3".
# Please also look if there are files or directories which have a
# different copyright/license attached and list them here.
# Please avoid picking licenses with terms that are more restrictive than the
# packaged work, as it may make Debian's contributions unacceptable upstream.
#
# If you need, there are some extra license texts available in two places:
# /usr/share/debhelper/dh_make/licenses/
# /usr/share/common-licenses/


@ -0,0 +1,2 @@
opengnsys-gitinstaller_0.5_amd64.buildinfo unknown optional
opengnsys-gitinstaller_0.5_amd64.deb unknown optional


@ -0,0 +1 @@
/opt/opengnsys/ogrepository/oggit/lib


@ -0,0 +1 @@
opengnsys_git_installer.py /opt/opengnsys/ogrepository/oggit/lib


@ -0,0 +1,2 @@
misc:Depends=
misc:Pre-Depends=


@ -0,0 +1,33 @@
#!/usr/bin/make -f
# See debhelper(7) (uncomment to enable).
# Output every command that modifies files on the build system.
#export DH_VERBOSE = 1
# See FEATURE AREAS in dpkg-buildflags(1).
#export DEB_BUILD_MAINT_OPTIONS = hardening=+all
# See ENVIRONMENT in dpkg-buildflags(1).
# Package maintainers to append CFLAGS.
#export DEB_CFLAGS_MAINT_APPEND = -Wall -pedantic
# Package maintainers to append LDFLAGS.
#export DEB_LDFLAGS_MAINT_APPEND = -Wl,--as-needed
%:
dh $@
# Run composer install during the build phase
override_dh_auto_build:
# dh_make generated override targets.
# This is an example for Cmake (see <https://bugs.debian.org/641051>).
#override_dh_auto_configure:
# dh_auto_configure -- \
# -DCMAKE_LIBRARY_PATH=$(DEB_HOST_MULTIARCH)


@ -0,0 +1,11 @@
[Service]
RestartSec=10s
Type=simple
User={gitapi_user}
Group={gitapi_group}
WorkingDirectory={gitapi_work_path}
ExecStart=/usr/bin/gunicorn -w 4 -b {gitapi_host}:{gitapi_port} gitapi:app
Restart=always
[Install]
WantedBy=multi-user.target
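The `{gitapi_user}`-style placeholders in this unit file are filled in by the installer's `_install_template`, which does plain brace substitution (stringifying non-string values such as ports). A minimal sketch of that behavior; the function name `render_template` is illustrative, not from the source:

```python
def render_template(text, values):
    # Replace each {key} marker with its value; non-strings (e.g. port
    # numbers) are stringified first, mirroring _install_template.
    for key, value in values.items():
        text = text.replace("{" + key + "}", str(value))
    return text

unit = "User={gitapi_user}\nExecStart=/usr/bin/gunicorn -b {gitapi_host}:{gitapi_port} gitapi:app"
rendered = render_template(unit, {"gitapi_user": "opengnsys",
                                  "gitapi_host": "0.0.0.0",
                                  "gitapi_port": 8087})
print(rendered)
```

Any placeholder without a matching key is left verbatim, so a typo in the template survives into the installed unit file.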


@ -0,0 +1,31 @@
#!/bin/bash
set -e
if [ ! -f "/etc/apt/sources.list.d/opengnsys.sources" ] ; then
cat > /etc/apt/sources.list.d/opengnsys.sources <<HERE
Types: deb
URIs: https://ognproject.evlt.uma.es/debian-opengnsys/opengnsys
Suites: noble
Components: main
Signed-By:
-----BEGIN PGP PUBLIC KEY BLOCK-----
.
mDMEZzx/SxYJKwYBBAHaRw8BAQdAa83CuAJ5/+7Pn9LHT/k34EAGpx5FnT/ExHSj
XZG1JES0Ik9wZW5HbnN5cyA8b3Blbmduc3lzQG9wZW5nbnN5cy5lcz6ImQQTFgoA
QRYhBC+J38Xsso227ZbDVt2S5xJQRhKDBQJnPH9LAhsDBQkFo5qABQsJCAcCAiIC
BhUKCQgLAgQWAgMBAh4HAheAAAoJEN2S5xJQRhKDW/MBAO6swnpwdrbm48ypMyPh
NboxvF7rCqBqHWwRHvkvrq7pAP9zd98r7z2AvqVXZxnaCsLTUNMEL12+DVZAUZ1G
EquRBbg4BGc8f0sSCisGAQQBl1UBBQEBB0B6D6tkrwXSHi7ebGYsiMPntqwdkQ/S
84SFTlSxRqdXfgMBCAeIfgQYFgoAJhYhBC+J38Xsso227ZbDVt2S5xJQRhKDBQJn
PH9LAhsMBQkFo5qAAAoJEN2S5xJQRhKDJ+cBAM9jYbeq5VXkHLfODeVztgSXnSUe
yklJ18oQmpeK5eWeAQDKYk/P0R+1ZJDItxkeP6pw62bCDYGQDvdDGPMAaIT6CA==
=xcNc
-----END PGP PUBLIC KEY BLOCK-----
HERE
fi
apt update
apt install -y python3-git opengnsys-libarchive-c python3-termcolor python3-requests python3-tqdm bsdextrautils


@ -28,13 +28,28 @@ import requests
import tempfile
import hashlib
import datetime
import tqdm
#FORGEJO_VERSION="8.0.3"
FORGEJO_VERSION="9.0.0"
FORGEJO_VERSION="10.0.3"
FORGEJO_URL=f"https://codeberg.org/forgejo/forgejo/releases/download/v{FORGEJO_VERSION}/forgejo-{FORGEJO_VERSION}-linux-amd64"
def download_with_progress(url, output_file):
with requests.get(url, stream=True, timeout=60) as req:
progress = tqdm.tqdm()
progress.total = int(req.headers["Content-Length"])
progress.unit_scale = True
progress.desc = "Downloading"
for chunk in req.iter_content(chunk_size=8192):
output_file.write(chunk)
progress.n = progress.n + len(chunk)
progress.refresh()
progress.close()
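The chunked copy loop in `download_with_progress` can be exercised offline against any pair of file-like objects. A hedged sketch of the same accumulation pattern: the real helper uses `requests` and `tqdm`, which are replaced here by a plain byte counter so the logic can be tested without a network:

```python
import io

def copy_with_progress(src, dst, total, chunk_size=8192, report=lambda msg: None):
    # Copy src to dst in fixed-size chunks, reporting cumulative progress
    # after each chunk, like the tqdm loop in download_with_progress.
    copied = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)
        copied += len(chunk)
        report(f"{copied}/{total} bytes")
    return copied

src = io.BytesIO(b"x" * 20000)
dst = io.BytesIO()
copy_with_progress(src, dst, total=20000, report=print)
```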
def show_error(*args):
"""
@ -65,6 +80,23 @@ class RequirementException(Exception):
super().__init__(message)
self.message = message
class OptionalDependencyException(Exception):
"""Excepción que indica que nos falta algún requisito opcional
Attributes:
message (str): Mensaje de error mostrado al usuario
"""
def __init__(self, message):
"""Inicializar OptionalDependencyException.
Args:
message (str): Mensaje de error mostrado al usuario
"""
super().__init__(message)
self.message = message
class FakeTemporaryDirectory:
"""Imitación de TemporaryDirectory para depuración"""
def __init__(self, dirname):
@ -74,6 +106,57 @@ class FakeTemporaryDirectory:
def __str__(self):
return self.name
class OgliveMounter:
"""
A class to handle mounting of Oglive images from a given URL or local file.
Attributes:
logger (logging.Logger): Logger instance for logging messages.
squashfs (str): Path to the squashfs file within the mounted Oglive image.
initrd (str): Path to the initrd image within the mounted Oglive image.
kernel (str): Path to the kernel image within the mounted Oglive image.
Methods:
__init__(url):
Initializes the OgliveMounter instance, downloads the Oglive image if URL is provided,
and mounts the image to a temporary directory.
__del__():
Unmounts the mounted directory and cleans up resources.
"""
def __init__(self, url):
self.logger = logging.getLogger("OgliveMounter")
self.mountdir = tempfile.TemporaryDirectory()
self.logger.info("Will mount oglive found at %s", url)
if url.startswith("http://") or url.startswith("https://"):
self.logger.debug("We got an URL, downloading %s", url)
self.tempfile = tempfile.NamedTemporaryFile(mode='wb')
filename = self.tempfile.name
download_with_progress(url, self.tempfile)
else:
self.logger.debug("We got a filename")
filename = url
self.logger.debug("Mounting %s at %s", filename, self.mountdir.name)
subprocess.run(["/usr/bin/mount", filename, self.mountdir.name], check=True)
self.squashfs = os.path.join(self.mountdir.name, "ogclient", "ogclient.sqfs")
self.initrd = os.path.join(self.mountdir.name, "ogclient", "oginitrd.img")
self.kernel = os.path.join(self.mountdir.name, "ogclient", "ogvmlinuz")
def __del__(self):
self.logger.debug("Unmounting directory %s", self.mountdir.name)
subprocess.run(["/usr/bin/umount", self.mountdir.name], check=True)
class Oglive:
"""Interfaz a utilidad oglivecli
@ -88,6 +171,10 @@ class Oglive:
def _cmd(self, args):
cmd = [self.binary] + args
if not os.path.exists(self.binary):
raise OptionalDependencyException("Missing oglivecli command. Please use --squashfs-file (see README.md for more details)")
self.__logger.debug("comando: %s", cmd)
proc = subprocess.run(cmd, shell=False, check=True, capture_output=True)
@ -122,19 +209,46 @@ class OpengnsysGitInstaller:
self.__logger.debug("Inicializando")
self.testmode = False
self.base_path = "/opt/opengnsys"
self.ogrepository_base_path = os.path.join(self.base_path, "ogrepository")
self.git_basedir = "base.git"
self.email = "OpenGnsys@opengnsys.com"
self.opengnsys_bin_path = os.path.join(self.base_path, "bin")
self.opengnsys_etc_path = os.path.join(self.base_path, "etc")
self.forgejo_user = "oggit"
self.forgejo_password = "opengnsys"
self.forgejo_organization = "opengnsys"
self.forgejo_port = 3000
self.forgejo_port = 3100
self.forgejo_bin_path = os.path.join(self.ogrepository_base_path, "bin")
self.forgejo_exe = os.path.join(self.forgejo_bin_path, "forgejo")
self.forgejo_conf_dir_path = os.path.join(self.ogrepository_base_path, "etc", "forgejo")
self.lfs_dir_path = os.path.join(self.ogrepository_base_path, "oggit", "git-lfs")
self.git_dir_path = os.path.join(self.ogrepository_base_path, "oggit", "git")
self.forgejo_var_dir_path = os.path.join(self.ogrepository_base_path, "var", "lib", "forgejo")
self.forgejo_work_dir_path = os.path.join(self.forgejo_var_dir_path, "work")
self.forgejo_work_custom_dir_path = os.path.join(self.forgejo_work_dir_path, "custom")
self.forgejo_db_dir_path = os.path.join(self.forgejo_var_dir_path, "db")
self.forgejo_data_dir_path = os.path.join(self.forgejo_var_dir_path, "data")
self.forgejo_db_path = os.path.join(self.forgejo_db_dir_path, "forgejo.db")
self.forgejo_log_dir_path = os.path.join(self.ogrepository_base_path, "log", "forgejo")
self.dependencies = ["git", "python3-flask", "python3-flasgger", "gunicorn", ]
self.set_ssh_user_group("oggit", "oggit")
self.temp_dir = None
self.script_path = os.path.realpath(os.path.dirname(__file__))
# Where we look for forgejo-app.ini and similar templates.
self.template_path = self.script_path
# Possible names for SSH public keys
self.ssh_key_users = ["root", "opengnsys"]
self.key_names = ["id_rsa.pub", "id_ed25519.pub", "id_ecdsa.pub", "id_ed25519_sk.pub", "id_ecdsa_sk.pub"]
@ -147,10 +261,14 @@ class OpengnsysGitInstaller:
for kp in self.key_paths:
self.key_paths_dict[kp] = 1
os.environ["PATH"] += os.pathsep + os.path.join(self.base_path, "bin")
self.oglive = Oglive()
def set_testmode(self, value):
"""Establece el modo de prueba"""
self.testmode = value
@ -159,10 +277,6 @@ class OpengnsysGitInstaller:
"""Ignorar requisito de clave de ssh para el instalador"""
self.ignoresshkey = value
def set_usesshkey(self, value):
"""Usar clave de ssh especificada"""
self.usesshkey = value
def set_basepath(self, value):
"""Establece ruta base de OpenGnsys
Valor por defecto: /opt/opengnsys
@ -218,7 +332,7 @@ class OpengnsysGitInstaller:
def init_git_repo(self, reponame):
"""Inicializa un repositorio Git"""
# Creamos repositorio
ogdir_images = os.path.join(self.base_path, "images")
ogdir_images = os.path.join(self.ogrepository_base_path, "oggit")
self.__logger.info("Creando repositorio de GIT %s", reponame)
os.makedirs(os.path.join(ogdir_images, self.git_basedir), exist_ok=True)
@ -294,42 +408,60 @@ class OpengnsysGitInstaller:
raise TimeoutError("Timed out waiting for connection!")
def add_ssh_key_from_squashfs(self, oglive_num = None):
def add_ssh_key_from_squashfs(self, oglive_num = None, squashfs_file = None, oglive_file = None):
name = "(unknown)"
mounter = None
if not oglive_file is None:
mounter = OgliveMounter(oglive_file)
squashfs_file = mounter.squashfs
if squashfs_file is None:
if oglive_num is None:
self.__logger.info("Using default oglive")
oglive_num = self.oglive.get_default()
else:
self.__logger.info("Using oglive %i", oglive_num)
name = self.oglive.get_clients()[str(oglive_num)]
if oglive_num is None:
self.__logger.info("Using default oglive")
oglive_num = int(self.oglive.get_default())
else:
self.__logger.info("Using oglive %i", oglive_num)
self.__logger.info("Using specified squashfs file %s", squashfs_file)
name = os.path.basename(squashfs_file)
oglive_client = self.oglive.get_clients()[str(oglive_num)]
self.__logger.info("Oglive is %s", oglive_client)
keys = installer.extract_ssh_keys(oglive_num = oglive_num)
keys = self.extract_ssh_keys_from_squashfs(oglive_num = oglive_num, squashfs_file=squashfs_file)
retvals = []
for k in keys:
timestamp = '{:%Y-%m-%d %H:%M:%S}'.format(datetime.datetime.now())
installer.add_forgejo_sshkey(k, f"Key for {oglive_client} ({timestamp})")
retvals = retvals + [self.add_forgejo_sshkey(k, f"Key for {name} ({timestamp})")]
return retvals
def extract_ssh_keys(self, oglive_num = None):
def extract_ssh_keys_from_squashfs(self, oglive_num = None, squashfs_file = None):
public_keys = []
squashfs = "ogclient.sqfs"
tftp_dir = os.path.join(self.base_path, "tftpboot")
if squashfs_file is None:
tftp_dir = os.path.join(self.base_path, "tftpboot")
if oglive_num is None:
self.__logger.info("Reading from default oglive")
oglive_num = self.oglive.get_default()
if oglive_num is None:
self.__logger.info("Reading from default oglive")
oglive_num = self.oglive.get_default()
else:
self.__logger.info("Reading from oglive %i", oglive_num)
oglive_client = self.oglive.get_clients()[str(oglive_num)]
self.__logger.info("Oglive is %s", oglive_client)
client_squashfs_path = os.path.join(tftp_dir, oglive_client, squashfs)
else:
self.__logger.info("Reading from oglive %i", oglive_num)
oglive_client = self.oglive.get_clients()[str(oglive_num)]
self.__logger.info("Oglive is %s", oglive_client)
client_squashfs_path = os.path.join(tftp_dir, oglive_client, squashfs)
self.__logger.info("Using specified squashfs file %s", squashfs_file)
client_squashfs_path = squashfs_file
self.__logger.info("Mounting %s", client_squashfs_path)
mount_tempdir = tempfile.TemporaryDirectory()
@ -352,49 +484,75 @@ class OpengnsysGitInstaller:
return public_keys
def _extract_ssh_key_from_initrd(self):
def extract_ssh_key_from_initrd(self, oglive_number = None, initrd_file = None):
public_key=""
INITRD = "oginitrd.img"
tftp_dir = os.path.join(self.base_path, "tftpboot")
default_num = self.oglive.get_default()
default_client = self.oglive.get_clients()[default_num]
client_initrd_path = os.path.join(tftp_dir, default_client, INITRD)
self.__logger.debug("Extracting ssh key from initrd")
#self.temp_dir = self._get_tempdir()
if initrd_file is None:
self.__logger.debug("Looking for initrd file")
tftp_dir = os.path.join(self.base_path, "tftpboot")
if oglive_number is None:
oglive_number = self.oglive.get_default()
if self.usesshkey:
with open(self.usesshkey, 'r') as f:
public_key = f.read().strip()
oglive_client = self.oglive.get_clients()[oglive_number]
client_initrd_path = os.path.join(tftp_dir, oglive_client, INITRD)
self.__logger.debug("Found at %s", client_initrd_path)
else:
if os.path.isfile(client_initrd_path):
#os.makedirs(temp_dir, exist_ok=True)
#os.chdir(self.temp_dir.name)
self.__logger.debug("Descomprimiendo %s", client_initrd_path)
public_key = None
with libarchive.file_reader(client_initrd_path) as initrd:
for file in initrd:
self.__logger.debug("Archivo: %s", file)
self.__logger.debug("Using provided initrd file %s", initrd_file)
client_initrd_path = initrd_file
pathname = file.pathname;
if pathname.startswith("./"):
pathname = pathname[2:]
self.__logger.debug("Extracting key from %s", client_initrd_path)
if pathname in self.key_paths_dict:
data = bytearray()
for block in file.get_blocks():
data = data + block
public_key = data.decode('utf-8').strip()
if os.path.isfile(client_initrd_path):
#os.makedirs(temp_dir, exist_ok=True)
#os.chdir(self.temp_dir.name)
self.__logger.debug("Uncompressing %s", client_initrd_path)
public_key = None
with libarchive.file_reader(client_initrd_path) as initrd:
for file in initrd:
self.__logger.debug("File: %s", file)
break
else:
print(f"No se encuentra la imagen de initrd {client_initrd_path}")
exit(2)
pathname = file.pathname;
if pathname.startswith("./"):
pathname = pathname[2:]
if pathname in self.key_paths_dict:
self.__logger.info("Found key %s, extracting", pathname)
data = bytearray()
for block in file.get_blocks():
data = data + block
public_key = data.decode('utf-8').strip()
break
else:
print(f"Failed to find initrd at {client_initrd_path}")
exit(2)
if not public_key:
self.__logger.warning("Failed to find a SSH key")
return public_key
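The member matching above reduces to stripping a leading `./` from each archive member name and looking it up in the precomputed key-path dictionary. A standalone sketch; the path set shown is illustrative (the installer builds it from `ssh_key_users` × `key_names`):

```python
# Illustrative subset of the installer's key_paths_dict.
KEY_PATHS = {
    "root/.ssh/id_rsa.pub": 1,
    "root/.ssh/id_ed25519.pub": 1,
}

def match_key_path(pathname, key_paths=KEY_PATHS):
    # Normalize the archive member name ("./root/..." -> "root/...")
    # and return it if it is one of the known public-key locations.
    if pathname.startswith("./"):
        pathname = pathname[2:]
    return pathname if pathname in key_paths else None

print(match_key_path("./root/.ssh/id_ed25519.pub"))  # → root/.ssh/id_ed25519.pub
```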
def get_image_paths(self, oglive_num = None):
squashfs = "ogclient.sqfs"
if oglive_num is None:
self.__logger.info("Will modify default client")
oglive_num = self.oglive.get_default()
tftp_dir = os.path.join(self.base_path, "tftpboot")
oglive_client = self.oglive.get_clients()[str(oglive_num)]
client_squashfs_path = os.path.join(tftp_dir, oglive_client, squashfs)
self.__logger.info("Squashfs: %s", client_squashfs_path)
def set_ssh_key_in_initrd(self, client_num = None):
INITRD = "oginitrd.img"
@ -534,7 +692,25 @@ class OpengnsysGitInstaller:
self.add_forgejo_sshkey(oglive_public_key, f"Key for {ogclient} ({timestamp})")
def install(self):
def verify_requirements(self):
self.__logger.info("verify_requirements()")
# Basic error checking.
self.__logger.debug("Checking euid")
if os.geteuid() != 0:
raise RequirementException("Must be run as root")
if not os.path.exists("/etc/debian_version"):
raise RequirementException("Installation is only supported on Debian and Ubuntu")
MIN_PYTHON = (3, 8)
if sys.version_info < MIN_PYTHON:
raise RequirementException("Python %s.%s minimum required.\n" % MIN_PYTHON)
def install_dependencies(self):
"""Instalar
Ejecuta todo el proceso de instalación incluyendo:
@ -551,32 +727,11 @@ class OpengnsysGitInstaller:
"""
self.__logger.info("install()")
ogdir_images = os.path.join(self.base_path, "images")
ENGINECFG = os.path.join(self.base_path, "client/etc/engine.cfg")
os.environ["PATH"] += os.pathsep + os.path.join(self.base_path, "bin")
tftp_dir = os.path.join(self.base_path, "tftpboot")
INITRD = "oginitrd.img"
self.temp_dir = self._get_tempdir()
SSHUSER = "opengnsys"
self.verify_requirements()
# Control básico de errores.
self.__logger.debug("Comprobando euid")
if os.geteuid() != 0:
raise RequirementException("Sólo ejecutable por root")
if not os.path.exists("/etc/debian_version"):
raise RequirementException("Instalación sólo soportada en Debian y Ubuntu")
MIN_PYTHON = (3, 8)
if sys.version_info < MIN_PYTHON:
raise RequirementException(f"Python %s.%s mínimo requerido.\n" % MIN_PYTHON)
self.__logger.debug("Instalando dependencias")
subprocess.run(["apt-get", "install", "-y", "git"], check=True)
self.__logger.debug("Installing dependencies")
subprocess.run(["apt-get", "install", "-y"] + self.dependencies, check=True)
def _install_template(self, template, destination, keysvalues):
@ -587,7 +742,10 @@ class OpengnsysGitInstaller:
data = template_file.read()
for key in keysvalues.keys():
data = data.replace("{" + key + "}", keysvalues[key])
if isinstance(keysvalues[key], int):
data = data.replace("{" + key + "}", str(keysvalues[key]))
else:
data = data.replace("{" + key + "}", keysvalues[key])
with open(destination, "w+", encoding="utf-8") as out_file:
out_file.write(data)
@ -598,88 +756,112 @@ class OpengnsysGitInstaller:
ret = subprocess.run(cmd, check=True,capture_output=True, encoding='utf-8')
return ret.stdout.strip()
def install_forgejo(self):
self.__logger.info("Installing Forgejo")
def install_api(self):
self.__logger.info("Installing Git API")
opengnsys_bin_path = os.path.join(self.base_path, "bin")
opengnsys_etc_path = os.path.join(self.base_path, "etc")
pathlib.Path(opengnsys_bin_path).mkdir(parents=True, exist_ok=True)
data = {
"gitapi_user" : "opengnsys",
"gitapi_group" : "opengnsys",
"gitapi_host" : "0.0.0.0",
"gitapi_port" : 8087,
"gitapi_work_path" : opengnsys_bin_path
}
shutil.copy("../api/gitapi.py", opengnsys_bin_path + "/gitapi.py")
shutil.copy("opengnsys_git_installer.py", opengnsys_bin_path + "/opengnsys_git_installer.py")
self._install_template(os.path.join(self.template_path, "gitapi.service"), "/etc/systemd/system/gitapi.service", data)
bin_path = os.path.join(self.base_path, "bin", "forgejo")
conf_dir_path = os.path.join(self.base_path, "etc", "forgejo")
self.__logger.debug("Reloading systemd and starting service")
subprocess.run(["systemctl", "daemon-reload"], check=True)
subprocess.run(["systemctl", "enable", "gitapi"], check=True)
subprocess.run(["systemctl", "restart", "gitapi"], check=True)
lfs_dir_path = os.path.join(self.base_path, "images", "git-lfs")
git_dir_path = os.path.join(self.base_path, "images", "git")
def _get_forgejo_data(self):
conf_path = os.path.join(self.forgejo_conf_dir_path, "app.ini")
forgejo_work_dir_path = os.path.join(self.base_path, "var", "lib", "forgejo/work")
forgejo_db_dir_path = os.path.join(self.base_path, "var", "lib", "forgejo/db")
forgejo_data_dir_path = os.path.join(self.base_path, "var", "lib", "forgejo/data")
data = {
"forgejo_user" : self.ssh_user,
"forgejo_group" : self.ssh_group,
"forgejo_port" : str(self.forgejo_port),
"forgejo_bin" : self.forgejo_exe,
"forgejo_app_ini" : conf_path,
"forgejo_work_path" : self.forgejo_work_dir_path,
"forgejo_data_path" : self.forgejo_data_dir_path,
"forgejo_db_path" : self.forgejo_db_path,
"forgejo_repository_root" : self.git_dir_path,
"forgejo_lfs_path" : self.lfs_dir_path,
"forgejo_log_path" : self.forgejo_log_dir_path,
"forgejo_hostname" : self._runcmd("hostname"),
"forgejo_lfs_jwt_secret" : self._runcmd([self.forgejo_exe,"generate", "secret", "LFS_JWT_SECRET"]),
"forgejo_jwt_secret" : self._runcmd([self.forgejo_exe,"generate", "secret", "JWT_SECRET"]),
"forgejo_internal_token" : self._runcmd([self.forgejo_exe,"generate", "secret", "INTERNAL_TOKEN"]),
"forgejo_secret_key" : self._runcmd([self.forgejo_exe,"generate", "secret", "SECRET_KEY"])
}
forgejo_db_path = os.path.join(forgejo_db_dir_path, "forgejo.db")
return data
forgejo_log_dir_path = os.path.join(self.base_path, "log", "forgejo")
def install_forgejo(self, download=True):
self.__logger.info("Installing Forgejo version %s", FORGEJO_VERSION)
conf_path = os.path.join(self.forgejo_conf_dir_path, "app.ini")
conf_path = os.path.join(conf_dir_path, "app.ini")
self.__logger.info("Stopping opengnsys-forgejo service. This may cause a harmless warning.")
self.__logger.debug("Stopping opengnsys-forgejo service")
subprocess.run(["systemctl", "stop", "opengnsys-forgejo"], check=False)
subprocess.run(["/usr/bin/systemctl", "stop", "opengnsys-forgejo"], check=False)
self.__logger.debug("Downloading from %s into %s", FORGEJO_URL, bin_path)
urllib.request.urlretrieve(FORGEJO_URL, bin_path)
os.chmod(bin_path, 0o755)
self.__logger.debug("Downloading from %s into %s", FORGEJO_URL, self.forgejo_exe)
pathlib.Path(self.forgejo_bin_path).mkdir(parents=True, exist_ok=True)
if os.path.exists(forgejo_db_path):
with open(self.forgejo_exe, "wb") as forgejo_bin:
download_with_progress(FORGEJO_URL, forgejo_bin)
os.chmod(self.forgejo_exe, 0o755)
if os.path.exists(self.forgejo_db_path):
self.__logger.debug("Removing old configuration")
os.unlink(forgejo_db_path)
os.unlink(self.forgejo_db_path)
else:
self.__logger.debug("Old configuration not present, ok.")
self.__logger.debug("Wiping old data")
for dir in [conf_dir_path, git_dir_path, lfs_dir_path, forgejo_work_dir_path, forgejo_data_dir_path, forgejo_db_dir_path]:
for dir in [self.forgejo_conf_dir_path, self.git_dir_path, self.lfs_dir_path, self.forgejo_work_dir_path, self.forgejo_data_dir_path, self.forgejo_db_dir_path]:
if os.path.exists(dir):
self.__logger.debug("Removing %s", dir)
shutil.rmtree(dir)
self.__logger.debug("Creating directories")
pathlib.Path(conf_dir_path).mkdir(parents=True, exist_ok=True)
pathlib.Path(git_dir_path).mkdir(parents=True, exist_ok=True)
pathlib.Path(lfs_dir_path).mkdir(parents=True, exist_ok=True)
pathlib.Path(forgejo_work_dir_path).mkdir(parents=True, exist_ok=True)
pathlib.Path(forgejo_data_dir_path).mkdir(parents=True, exist_ok=True)
pathlib.Path(forgejo_db_dir_path).mkdir(parents=True, exist_ok=True)
pathlib.Path(forgejo_log_dir_path).mkdir(parents=True, exist_ok=True)
pathlib.Path(self.opengnsys_etc_path).mkdir(parents=True, exist_ok=True)
pathlib.Path(self.forgejo_conf_dir_path).mkdir(parents=True, exist_ok=True)
pathlib.Path(self.git_dir_path).mkdir(parents=True, exist_ok=True)
pathlib.Path(self.lfs_dir_path).mkdir(parents=True, exist_ok=True)
pathlib.Path(self.forgejo_work_dir_path).mkdir(parents=True, exist_ok=True)
pathlib.Path(self.forgejo_data_dir_path).mkdir(parents=True, exist_ok=True)
pathlib.Path(self.forgejo_db_dir_path).mkdir(parents=True, exist_ok=True)
pathlib.Path(self.forgejo_log_dir_path).mkdir(parents=True, exist_ok=True)
os.chown(lfs_dir_path, self.ssh_uid, self.ssh_gid)
os.chown(git_dir_path, self.ssh_uid, self.ssh_gid)
os.chown(forgejo_data_dir_path, self.ssh_uid, self.ssh_gid)
os.chown(forgejo_work_dir_path, self.ssh_uid, self.ssh_gid)
os.chown(forgejo_db_dir_path, self.ssh_uid, self.ssh_gid)
os.chown(forgejo_log_dir_path, self.ssh_uid, self.ssh_gid)
os.chown(self.lfs_dir_path, self.ssh_uid, self.ssh_gid)
os.chown(self.git_dir_path, self.ssh_uid, self.ssh_gid)
os.chown(self.forgejo_data_dir_path, self.ssh_uid, self.ssh_gid)
os.chown(self.forgejo_work_dir_path, self.ssh_uid, self.ssh_gid)
os.chown(self.forgejo_db_dir_path, self.ssh_uid, self.ssh_gid)
os.chown(self.forgejo_log_dir_path, self.ssh_uid, self.ssh_gid)
data = {
"forgejo_user" : self.ssh_user,
"forgejo_group" : self.ssh_group,
"forgejo_port" : str(self.forgejo_port),
"forgejo_bin" : bin_path,
"forgejo_app_ini" : conf_path,
"forgejo_work_path" : forgejo_work_dir_path,
"forgejo_data_path" : forgejo_data_dir_path,
"forgejo_db_path" : forgejo_db_path,
"forgejo_repository_root" : git_dir_path,
"forgejo_lfs_path" : lfs_dir_path,
"forgejo_log_path" : forgejo_log_dir_path,
"forgejo_hostname" : self._runcmd("hostname"),
"forgejo_lfs_jwt_secret" : self._runcmd([bin_path,"generate", "secret", "LFS_JWT_SECRET"]),
"forgejo_jwt_secret" : self._runcmd([bin_path,"generate", "secret", "JWT_SECRET"]),
"forgejo_internal_token" : self._runcmd([bin_path,"generate", "secret", "INTERNAL_TOKEN"]),
"forgejo_secret_key" : self._runcmd([bin_path,"generate", "secret", "SECRET_KEY"])
}
data = self._get_forgejo_data()
self._install_template(os.path.join(self.script_path, "forgejo-app.ini"), conf_path, data)
self._install_template(os.path.join(self.script_path, "forgejo.service"), "/etc/systemd/system/opengnsys-forgejo.service", data)
self._install_template(os.path.join(self.template_path, "forgejo-app.ini"), conf_path, data)
self._install_template(os.path.join(self.template_path, "opengnsys-forgejo.service"), "/etc/systemd/system/opengnsys-forgejo.service", data)
self.__logger.debug("Reloading systemd and starting service")
@ -694,7 +876,7 @@ class OpengnsysGitInstaller:
self.__logger.info("Configuring forgejo")
def run_forge_cmd(args):
cmd = [bin_path, "--config", conf_path] + args
cmd = [self.forgejo_exe, "--config", conf_path] + args
self.__logger.debug("Running command: %s", cmd)
ret = subprocess.run(cmd, check=False, capture_output=True, encoding='utf-8', user=self.ssh_user)
@ -715,10 +897,80 @@ class OpengnsysGitInstaller:
with open(os.path.join(self.base_path, "etc", "ogGitApiToken.cfg"), "w+", encoding='utf-8') as token_file:
token_file.write(token)
def configure_forgejo(self):
data = self._get_forgejo_data()
self.__logger.debug("Creating directories")
ssh_key = self._extract_ssh_key_from_initrd()
pathlib.Path(self.opengnsys_etc_path).mkdir(parents=True, exist_ok=True)
pathlib.Path(self.forgejo_conf_dir_path).mkdir(parents=True, exist_ok=True)
pathlib.Path(self.git_dir_path).mkdir(parents=True, exist_ok=True)
pathlib.Path(self.lfs_dir_path).mkdir(parents=True, exist_ok=True)
pathlib.Path(self.forgejo_work_dir_path).mkdir(parents=True, exist_ok=True)
pathlib.Path(self.forgejo_work_custom_dir_path).mkdir(parents=True, exist_ok=True)
pathlib.Path(self.forgejo_data_dir_path).mkdir(parents=True, exist_ok=True)
pathlib.Path(self.forgejo_db_dir_path).mkdir(parents=True, exist_ok=True)
pathlib.Path(self.forgejo_log_dir_path).mkdir(parents=True, exist_ok=True)
self.add_forgejo_sshkey(ssh_key, "Default key")
os.chown(self.lfs_dir_path, self.ssh_uid, self.ssh_gid)
os.chown(self.git_dir_path, self.ssh_uid, self.ssh_gid)
os.chown(self.forgejo_data_dir_path, self.ssh_uid, self.ssh_gid)
os.chown(self.forgejo_work_dir_path, self.ssh_uid, self.ssh_gid)
os.chown(self.forgejo_db_dir_path, self.ssh_uid, self.ssh_gid)
os.chown(self.forgejo_log_dir_path, self.ssh_uid, self.ssh_gid)
conf_path = os.path.join(self.forgejo_conf_dir_path, "app.ini")
self._install_template(os.path.join(self.template_path, "forgejo-app.ini"), conf_path, data)
self._install_template(os.path.join(self.template_path, "opengnsys-forgejo.service"), "/etc/systemd/system/opengnsys-forgejo.service", data)
self.__logger.debug("Reloading systemd and starting service")
subprocess.run(["systemctl", "daemon-reload"], check=True)
subprocess.run(["systemctl", "enable", "opengnsys-forgejo"], check=True)
subprocess.run(["systemctl", "restart", "opengnsys-forgejo"], check=True)
self.__logger.info("Waiting for forgejo to start")
self._wait_for_port("localhost", self.forgejo_port)
self.__logger.info("Configuring forgejo")
def run_forge_cmd(args, ignore_errors = []):
cmd = [self.forgejo_exe, "--config", conf_path] + args
self.__logger.info("Running command: %s", cmd)
ret = subprocess.run(cmd, check=False, capture_output=True, encoding='utf-8', user=self.ssh_user)
if ret.returncode == 0:
return ret.stdout.strip()
else:
self.__logger.error("Failed to run command: %s, return code %i", cmd, ret.returncode)
self.__logger.error("stdout: %s", ret.stdout.strip())
self.__logger.error("stderr: %s", ret.stderr.strip())
for err in ignore_errors:
if err in ret.stderr:
self.__logger.info("Ignoring error, it's in the ignore list")
return ret.stdout.strip()
raise RuntimeError("Failed to run necessary command")
run_forge_cmd(["migrate"])
run_forge_cmd(["admin", "doctor", "check"])
run_forge_cmd(["admin", "user", "create", "--username", self.forgejo_user, "--password", self.forgejo_password, "--email", self.email], ignore_errors=["user already exists"])
token = run_forge_cmd(["admin", "user", "generate-access-token", "--username", self.forgejo_user, "-t", "gitapi", "--scopes", "all", "--raw"], ignore_errors = ["access token name has been used already"])
if token:
with open(os.path.join(self.base_path, "etc", "ogGitApiToken.cfg"), "w+", encoding='utf-8') as token_file:
token_file.write(token)
else:
self.__logger.info("Keeping the old token")
def add_forgejo_repo(self, repository_name, description = ""):
@ -764,6 +1016,7 @@ class OpengnsysGitInstaller:
)
self.__logger.info("Request status was %i, content %s", r.status_code, r.content)
return r.status_code, r.content.decode('utf-8')
def add_forgejo_organization(self, pubkey, description = ""):
token = ""
@ -799,8 +1052,7 @@ if __name__ == '__main__':
streamLog = logging.StreamHandler()
streamLog.setLevel(logging.INFO)
pathlib.Path(opengnsys_log_dir).mkdir(parents=True, exist_ok=True)
logFilePath = f"{opengnsys_log_dir}/git_installer.log"
fileLog = logging.FileHandler(logFilePath)
@ -815,6 +1067,25 @@ if __name__ == '__main__':
logger.addHandler(fileLog)
if "postinst" in os.path.basename(__file__):
logger.info("Running as post-install script")
installer=OpengnsysGitInstaller()
logger.debug("Obtaining configuration from debconf")
import debconf
with debconf.Debconf(run_frontend=True) as db:
installer.forgejo_organization = db.get('opengnsys/forgejo_organization')
installer.forgejo_user = db.get('opengnsys/forgejo_user')
installer.forgejo_password = db.get('opengnsys/forgejo_password')
installer.email = db.get('opengnsys/forgejo_email')
installer.forgejo_port = int(db.get('opengnsys/forgejo_port'))
# Templates get installed here
installer.template_path = "/usr/share/opengnsys-forgejo/"
installer.configure_forgejo()
sys.exit(0)
parser = argparse.ArgumentParser(
prog="OpenGnsys Installer",
description="Script para la instalación del repositorio git",
@ -824,15 +1095,23 @@ if __name__ == '__main__':
parser.add_argument('--testmode', action='store_true', help="Modo de prueba")
parser.add_argument('--ignoresshkey', action='store_true', help="Ignorar clave de SSH")
parser.add_argument('--use-ssh-key', metavar="FILE", type=str, help="Add the SSH key from the specified file")
parser.add_argument('--test-createuser', action='store_true')
parser.add_argument('--extract-ssh-key', action='store_true', help="Extract SSH key from oglive squashfs")
parser.add_argument('--set-ssh-key', action='store_true', help="Read SSH key from oglive squashfs and set it in Forgejo")
parser.add_argument('--extract-ssh-key-from-initrd', action='store_true', help="Extract SSH key from oglive initrd (obsolete)")
parser.add_argument('--initrd-file', metavar="FILE", help="Initrd file to extract SSH key from")
parser.add_argument('--squashfs-file', metavar="FILE", help="Squashfs file to extract SSH key from")
parser.add_argument('--oglive-file', metavar="FILE", help="Oglive file (ISO) to extract SSH key from")
parser.add_argument('--oglive-url', metavar="URL", help="URL to oglive file (ISO) to extract SSH key from")
parser.add_argument('--set-ssh-key-in-initrd', action='store_true', help="Configure SSH key in oglive (obsolete)")
parser.add_argument('--oglive', type=int, metavar='NUM', help = "Do SSH key manipulation on this oglive")
parser.add_argument('--quiet', action='store_true', help="Quiet console output")
parser.add_argument('--get-image-paths', action='store_true', help="Get paths to image files")
parser.add_argument("-v", "--verbose", action="store_true", help = "Verbose console output")
@ -848,7 +1127,6 @@ if __name__ == '__main__':
installer = OpengnsysGitInstaller()
installer.set_testmode(args.testmode)
installer.set_ignoresshkey(args.ignoresshkey)
logger.debug("Inicio de instalación")
@ -860,25 +1138,40 @@ if __name__ == '__main__':
elif args.test_createuser:
installer.set_ssh_user_group("oggit2", "oggit2")
elif args.extract_ssh_key:
keys = installer.extract_ssh_keys_from_squashfs(oglive_num = args.oglive)
print(f"{keys}")
elif args.extract_ssh_key_from_initrd:
key = installer.extract_ssh_key_from_initrd(oglive_number = args.oglive, initrd_file = args.initrd_file)
print(f"{key}")
elif args.set_ssh_key:
installer.add_ssh_key_from_squashfs(oglive_num=args.oglive, squashfs_file=args.squashfs_file, oglive_file = args.oglive_file or args.oglive_url)
elif args.use_ssh_key:
with open(args.use_ssh_key, 'r', encoding='utf-8') as ssh_key_file:
ssh_key_data = ssh_key_file.read().strip()
(keytype, keydata, description) = ssh_key_data.split(" ", 2)
installer.add_forgejo_sshkey(f"{keytype} {keydata}", description)
elif args.set_ssh_key_in_initrd:
installer.set_ssh_key_in_initrd()
elif args.get_image_paths:
installer.get_image_paths(oglive_num = args.oglive)
else:
installer.install_dependencies()
installer.install_api()
installer.install_forgejo()
installer.add_forgejo_repo("windows", "Windows")
installer.add_forgejo_repo("linux", "Linux")
installer.add_forgejo_repo("mac", "Mac")
installer.add_ssh_key_from_squashfs(oglive_num = args.oglive, squashfs_file=args.squashfs_file, oglive_file = args.oglive_file or args.oglive_url)
except RequirementException as req:
show_error(f"Requisito para la instalación no satisfecho: {req.message}")
exit(1)
except OptionalDependencyException as optreq:
show_error(optreq.message)
exit(1)

View File

@ -0,0 +1,17 @@
#!/bin/bash
set -e
git clone https://github.com/dchevell/flask-executor opengnsys-flask-executor
cd opengnsys-flask-executor
version=$(python3 ./setup.py --version)
cd ..
if [ -d "opengnsys-flask-executor-${version}" ] ; then
echo "Directory opengnsys-flask-executor-${version} already exists, won't overwrite"
exit 1
else
rm -rf opengnsys-flask-executor/.git
mv opengnsys-flask-executor "opengnsys-flask-executor-${version}"
tar -c --xz -v -f "opengnsys-flask-executor_${version}.orig.tar.xz" "opengnsys-flask-executor-${version}"
fi

View File

@ -0,0 +1,28 @@
name: Flask-Executor tests

on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.7", "3.8", "3.9", "3.10"]
        flask-version: ["<2.2", ">=2.2"]
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -q "flask ${{ matrix.flask-version }}"
          pip install -e .[test]
      - name: Test with pytest
        run: |
          pytest --cov=flask_executor/ --cov-report=xml
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3

View File

@ -0,0 +1,105 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# pyenv
.python-version
# celery beat schedule file
celerybeat-schedule
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/

View File

@ -0,0 +1,21 @@
MIT License
Copyright (c) 2018 Dave Chevell
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@ -0,0 +1,134 @@
Flask-Executor
==============
[![Build Status](https://github.com/dchevell/flask-executor/actions/workflows/tests.yml/badge.svg)](https://github.com/dchevell/flask-executor/actions/workflows/tests.yml)
[![codecov](https://codecov.io/gh/dchevell/flask-executor/branch/master/graph/badge.svg)](https://codecov.io/gh/dchevell/flask-executor)
[![PyPI Version](https://img.shields.io/pypi/v/Flask-Executor.svg)](https://pypi.python.org/pypi/Flask-Executor)
[![GitHub license](https://img.shields.io/github/license/dchevell/flask-executor.svg)](https://github.com/dchevell/flask-executor/blob/master/LICENSE)
Sometimes you need a simple task queue without the overhead of separate worker processes or powerful-but-complex libraries beyond your requirements. Flask-Executor is an easy-to-use wrapper for the `concurrent.futures` module that lets you initialise and configure executors via common Flask application patterns. It's a great way to get up and running fast with a lightweight in-process task queue.
Installation
------------
Flask-Executor is available on PyPI and can be installed with:
    pip install flask-executor
Quick start
-----------
Here's a quick example of using Flask-Executor inside your Flask application:
```python
from flask import Flask
from flask_executor import Executor

app = Flask(__name__)
executor = Executor(app)

def send_email(recipient, subject, body):
    # Magic to send an email
    return True

@app.route('/signup')
def signup():
    # Do signup form
    executor.submit(send_email, recipient, subject, body)
```
Contexts
--------
When calling `submit()` or `map()` Flask-Executor will wrap `ThreadPoolExecutor` callables with a
copy of both the current application context and current request context. Code that must be run in
these contexts or that depends on information or configuration stored in `flask.current_app`,
`flask.request` or `flask.g` can be submitted to the executor without modification.
Note: due to limitations in Python's default object serialisation and a lack of shared memory space between subprocesses, contexts cannot be pushed to `ProcessPoolExecutor()` workers.
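The context-copying idea can be pictured with a stdlib-only sketch built on `contextvars` — a hypothetical illustration of the general technique, not Flask-Executor's actual implementation (the names `request_id` and `wrap_with_context` are invented for this example):

```python
import contextvars
from concurrent.futures import ThreadPoolExecutor

# request_id stands in for the kind of per-request state Flask keeps
# in its application and request contexts.
request_id = contextvars.ContextVar("request_id", default=None)

def wrap_with_context(fn):
    # Snapshot the submitting thread's context at submit time...
    ctx = contextvars.copy_context()
    def inner(*args, **kwargs):
        # ...and run the callable inside that copy on the worker thread,
        # where a fresh context would otherwise see only the default value.
        return ctx.run(fn, *args, **kwargs)
    return inner

def task():
    return request_id.get()

request_id.set("req-42")
with ThreadPoolExecutor(max_workers=1) as pool:
    result = pool.submit(wrap_with_context(task)).result()
print(result)  # req-42
```

Submitting `task` directly (without the wrapper) would print `None`, since worker threads do not inherit the submitter's context automatically.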
Futures
-------
You may want to preserve access to Futures returned from the executor, so that you can retrieve the
results in a different part of your application. Flask-Executor allows Futures to be stored within
the executor itself and provides methods for querying and returning them in different parts of your
app:
```python
@app.route('/start-task')
def start_task():
    executor.submit_stored('calc_power', pow, 323, 1235)
    return jsonify({'result': 'success'})

@app.route('/get-result')
def get_result():
    if not executor.futures.done('calc_power'):
        return jsonify({'status': executor.futures._state('calc_power')})
    future = executor.futures.pop('calc_power')
    return jsonify({'status': 'done', 'result': future.result()})
```
Decoration
----------
Flask-Executor lets you decorate methods in the same style as distributed task queues like
Celery:
```python
@executor.job
def fib(n):
    if n <= 2:
        return 1
    else:
        return fib(n - 1) + fib(n - 2)

@app.route('/decorate_fib')
def decorate_fib():
    fib.submit(5)
    fib.submit_stored('fibonacci', 5)
    fib.map(range(1, 6))
    return 'OK'
```
Default Callbacks
-----------------
Future objects can have callbacks attached by using `Future.add_done_callback`. Flask-Executor
lets you specify default callbacks that will be applied to all new futures created by the executor:
```python
def some_callback(future):
    # do something with future
    ...

executor.add_default_done_callback(some_callback)

# Callback will be added to the below task automatically
executor.submit(pow, 323, 1235)
```
Propagate Exceptions
--------------------
Normally any exceptions thrown by background threads or processes will be swallowed unless explicitly
checked for. To instead surface all exceptions thrown by background tasks, Flask-Executor can add
a special default callback that raises any exceptions thrown by tasks submitted to the executor:
```python
app.config['EXECUTOR_PROPAGATE_EXCEPTIONS'] = True
```
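To see why this option matters, here is a plain `concurrent.futures` demonstration — independent of Flask-Executor — of how a background exception stays hidden until the future is explicitly inspected:

```python
from concurrent.futures import ThreadPoolExecutor

def boom():
    raise ValueError("task failed")

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(boom)
    # The worker has already raised, but nothing surfaces here.

# The exception only becomes visible once the Future is inspected:
error = future.exception()
print(type(error).__name__, error)  # ValueError task failed
```

With `EXECUTOR_PROPAGATE_EXCEPTIONS` enabled, Flask-Executor attaches a done-callback that performs this inspection for you and re-raises the stored exception.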
Documentation
-------------
Check out the full documentation at [flask-executor.readthedocs.io](https://flask-executor.readthedocs.io)!

View File

@ -0,0 +1,7 @@
opengnsys-flask-executor (0.10.0) UNRELEASED; urgency=medium

  * Initial version

 -- Vadim Troshchinskiy <vtroshchinskiy@qindel.com>  Tue, 23 Dec 2024 10:47:04 +0000

View File

@ -0,0 +1,28 @@
Source: opengnsys-flask-executor
Maintainer: OpenGnsys <opengnsys@opengnsys.org>
Section: python
Priority: optional
Build-Depends: debhelper-compat (= 12),
dh-python,
libarchive-dev,
python3-all,
python3-mock,
python3-pytest,
python3-setuptools
Standards-Version: 4.5.0
Rules-Requires-Root: no
Homepage: https://github.com/dchevell/flask-executor
Vcs-Browser: https://github.com/dchevell/flask-executor
Vcs-Git: https://github.com/dchevell/flask-executor.git
Package: opengnsys-flask-executor
Architecture: all
Depends: ${lib:Depends}, ${misc:Depends}, ${python3:Depends}
Description: Python3 Flask-Executor module
Sometimes you need a simple task queue without the overhead of separate worker
processes or powerful-but-complex libraries beyond your requirements.
.
Flask-Executor is an easy to use wrapper for the concurrent.futures module that
lets you initialise and configure executors via common Flask application patterns.
It's a great way to get up and running fast with a lightweight in-process task queue.
.

View File

@ -0,0 +1,208 @@
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: python-libarchive-c
Source: https://github.com/Changaco/python-libarchive-c
Files: *
Copyright: 2014-2018 Changaco <changaco@changaco.oy.lc>
License: CC-0
Files: tests/surrogateescape.py
Copyright: 2015 Changaco <changaco@changaco.oy.lc>
2011-2013 Victor Stinner <victor.stinner@gmail.com>
License: BSD-2-clause or PSF-2
Files: debian/*
Copyright: 2015 Jerémy Bobbio <lunar@debian.org>
2019 Mattia Rizzolo <mattia@debian.org>
License: permissive
Copying and distribution of this package, with or without
modification, are permitted in any medium without royalty
provided the copyright notice and this notice are
preserved.
License: BSD-2-clause
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.
License: PSF-2
1. This LICENSE AGREEMENT is between the Python Software Foundation ("PSF"),
and the Individual or Organization ("Licensee") accessing and otherwise using
this software ("Python") in source or binary form and its associated
documentation.
.
2. Subject to the terms and conditions of this License Agreement, PSF hereby
grants Licensee a nonexclusive, royalty-free, world-wide license to
reproduce, analyze, test, perform and/or display publicly, prepare derivative
works, distribute, and otherwise use Python alone or in any derivative
version, provided, however, that PSF's License Agreement and PSF's notice of
copyright, i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006 Python
Software Foundation; All Rights Reserved" are retained in Python alone or in
any derivative version prepared by Licensee.
.
3. In the event Licensee prepares a derivative work that is based on or
incorporates Python or any part thereof, and wants to make the derivative
work available to others as provided herein, then Licensee hereby agrees to
include in any such work a brief summary of the changes made to Python.
.
4. PSF is making Python available to Licensee on an "AS IS" basis. PSF MAKES
NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT
NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF
MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF
PYTHON WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.
.
5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON FOR ANY
INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF
MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, OR ANY DERIVATIVE
THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
.
6. This License Agreement will automatically terminate upon a material breach
of its terms and conditions.
.
7. Nothing in this License Agreement shall be deemed to create any
relationship of agency, partnership, or joint venture between PSF and
Licensee. This License Agreement does not grant permission to use PSF
trademarks or trade name in a trademark sense to endorse or promote products
or services of Licensee, or any third party.
.
8. By copying, installing or otherwise using Python, Licensee agrees to be
bound by the terms and conditions of this License Agreement.
License: CC-0
Statement of Purpose
.
The laws of most jurisdictions throughout the world automatically
confer exclusive Copyright and Related Rights (defined below) upon
the creator and subsequent owner(s) (each and all, an "owner") of an
original work of authorship and/or a database (each, a "Work").
.
Certain owners wish to permanently relinquish those rights to a Work
for the purpose of contributing to a commons of creative, cultural
and scientific works ("Commons") that the public can reliably and
without fear of later claims of infringement build upon, modify,
incorporate in other works, reuse and redistribute as freely as
possible in any form whatsoever and for any purposes, including
without limitation commercial purposes. These owners may contribute
to the Commons to promote the ideal of a free culture and the further
production of creative, cultural and scientific works, or to gain
reputation or greater distribution for their Work in part through the
use and efforts of others.
.
For these and/or other purposes and motivations, and without any
expectation of additional consideration or compensation, the person
associating CC0 with a Work (the "Affirmer"), to the extent that he
or she is an owner of Copyright and Related Rights in the Work,
voluntarily elects to apply CC0 to the Work and publicly distribute
the Work under its terms, with knowledge of his or her Copyright and
Related Rights in the Work and the meaning and intended legal effect
of CC0 on those rights.
.
1. Copyright and Related Rights. A Work made available under CC0 may
be protected by copyright and related or neighboring rights
("Copyright and Related Rights"). Copyright and Related Rights
include, but are not limited to, the following:
.
i. the right to reproduce, adapt, distribute, perform, display,
communicate, and translate a Work;
ii. moral rights retained by the original author(s) and/or
performer(s);
iii. publicity and privacy rights pertaining to a person's image
or likeness depicted in a Work;
iv. rights protecting against unfair competition in regards to a
Work, subject to the limitations in paragraph 4(a), below;
v. rights protecting the extraction, dissemination, use and
reuse of data in a Work;
vi. database rights (such as those arising under Directive
96/9/EC of the European Parliament and of the Council of 11
March 1996 on the legal protection of databases, and under
any national implementation thereof, including any amended or
successor version of such directive); and
vii. other similar, equivalent or corresponding rights throughout
the world based on applicable law or treaty, and any national
implementations thereof.
.
2. Waiver. To the greatest extent permitted by, but not in
contravention of, applicable law, Affirmer hereby overtly, fully,
permanently, irrevocably and unconditionally waives, abandons, and
surrenders all of Affirmer's Copyright and Related Rights and
associated claims and causes of action, whether now known or
unknown (including existing as well as future claims and causes of
action), in the Work (i) in all territories worldwide, (ii) for
the maximum duration provided by applicable law or treaty
(including future time extensions), (iii) in any current or future
medium and for any number of copies, and (iv) for any purpose
whatsoever, including without limitation commercial, advertising
or promotional purposes (the "Waiver"). Affirmer makes the Waiver
for the benefit of each member of the public at large and to the
detriment of Affirmer's heirs and successors, fully intending that
such Waiver shall not be subject to revocation, rescission,
cancellation, termination, or any other legal or equitable action
to disrupt the quiet enjoyment of the Work by the public as
contemplated by Affirmer's express Statement of Purpose.
.
3. Public License Fallback. Should any part of the Waiver for any
reason be judged legally invalid or ineffective under applicable law,
then the Waiver shall be preserved to the maximum extent permitted
taking into account Affirmer's express Statement of Purpose. In
addition, to the extent the Waiver is so judged Affirmer hereby
grants to each affected person a royalty-free, non transferable, non
sublicensable, non exclusive, irrevocable and unconditional license
to exercise Affirmer's Copyright and Related Rights in the Work (i)
in all territories worldwide, (ii) for the maximum duration provided
by applicable law or treaty (including future time extensions), (iii)
in any current or future medium and for any number of copies, and
(iv) for any purpose whatsoever, including without limitation
commercial, advertising or promotional purposes (the "License"). The
License shall be deemed effective as of the date CC0 was applied by
Affirmer to the Work. Should any part of the License for any reason
be judged legally invalid or ineffective under applicable law, such
partial invalidity or ineffectiveness shall not invalidate the
remainder of the License, and in such case Affirmer hereby affirms
that he or she will not (i) exercise any of his or her remaining
Copyright and Related Rights in the Work or (ii) assert any
associated claims and causes of action with respect to the Work, in
either case contrary to Affirmer's express Statement of Purpose.
.
4. Limitations and Disclaimers.
.
a. No trademark or patent rights held by Affirmer are waived,
abandoned, surrendered, licensed or otherwise affected by
this document.
b. Affirmer offers the Work as-is and makes no representations
or warranties of any kind concerning the Work, express,
implied, statutory or otherwise, including without limitation
warranties of title, merchantability, fitness for a
particular purpose, non infringement, or the absence of
latent or other defects, accuracy, or the present or absence
of errors, whether or not discoverable, all to the greatest
extent permissible under applicable law.
c. Affirmer disclaims responsibility for clearing rights of
other persons that may apply to the Work or any use thereof,
including without limitation any person's Copyright and
Related Rights in the Work. Further, Affirmer disclaims
responsibility for obtaining any necessary consents,
permissions or other rights required for any use of the
Work.
d. Affirmer understands and acknowledges that Creative Commons
is not a party to this document and has no duty or obligation
with respect to this CC0 or use of the Work.

View File

@ -0,0 +1,22 @@
#!/usr/bin/make -f
export LC_ALL=C.UTF-8
export PYBUILD_NAME = libarchive-c
#export PYBUILD_BEFORE_TEST = cp -av README.rst {build_dir}
export PYBUILD_TEST_ARGS = -vv -s
#export PYBUILD_AFTER_TEST = rm -v {build_dir}/README.rst
# ./usr/lib/python3/dist-packages/libarchive/
export PYBUILD_INSTALL_ARGS=--install-lib=/usr/share/opengnsys-modules/python3/dist-packages/
%:
	dh $@ --with python3 --buildsystem=pybuild

override_dh_gencontrol:
	dh_gencontrol -- \
	  -Vlib:Depends=$(shell dpkg-query -W -f '$${Depends}' libarchive-dev \
	  | sed -E 's/.*(libarchive[[:alnum:].-]+).*/\1/')

override_dh_installdocs:
	# Nothing, we don't want docs

override_dh_installchangelogs:
	# Nothing, we don't want the changelog

View File

@ -0,0 +1 @@
3.0 (quilt)

View File

@ -0,0 +1,2 @@
Tests: upstream-tests
Depends: @, python3-mock, python3-pytest

View File

@ -0,0 +1,14 @@
#!/bin/sh
set -e
if ! [ -d "$AUTOPKGTEST_TMP" ]; then
echo "AUTOPKGTEST_TMP not set." >&2
exit 1
fi
cp -rv tests "$AUTOPKGTEST_TMP"
cd "$AUTOPKGTEST_TMP"
mkdir -v libarchive
touch README.rst
py.test-3 tests -vv -l -r a

View File

@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
SPHINXPROJ    = Flask-Executor
SOURCEDIR     = .
BUILDDIR      = _build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

View File

@ -0,0 +1,30 @@
flask\_executor package
=======================

Submodules
----------

flask\_executor.executor module
-------------------------------

.. automodule:: flask_executor.executor
   :members:
   :undoc-members:
   :show-inheritance:

flask\_executor.futures module
------------------------------

.. automodule:: flask_executor.futures
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: flask_executor
   :members:
   :undoc-members:
   :show-inheritance:

View File

@ -0,0 +1,7 @@
flask_executor
==============

.. toctree::
   :maxdepth: 4

   flask_executor

View File

@ -0,0 +1,172 @@
# -*- coding: utf-8 -*-
#
# Configuration file for the Sphinx documentation builder.
#
# This file does only contain a selection of the most common options. For a
# full list see the documentation:
# http://www.sphinx-doc.org/en/master/config
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys

sys.path.insert(0, os.path.abspath('..'))

from flask_executor import __version__
# -- Project information -----------------------------------------------------
project = 'Flask-Executor'
copyright = '2018, Dave Chevell'
author = 'Dave Chevell'
# The short X.Y version
version = '.'.join(__version__.split('.')[:2])
# The full version, including alpha/beta/rc tags
release = __version__
# -- General configuration ---------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.intersphinx',
'sphinx.ext.viewcode',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path .
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'alabaster'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']
# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
# The default sidebars (for documents that don't match any pattern) are
# defined by theme itself. Builtin themes are using these templates by
# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
# 'searchbox.html']``.
#
# html_sidebars = {}
# -- Options for HTMLHelp output ---------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = 'Flask-Executordoc'
# -- Options for LaTeX output ------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'Flask-Executor.tex', 'Flask-Executor Documentation',
'Dave Chevell', 'manual'),
]
# -- Options for manual page output ------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'flask-executor', 'Flask-Executor Documentation',
[author], 1)
]
# -- Options for Texinfo output ----------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'Flask-Executor', 'Flask-Executor Documentation',
author, 'Flask-Executor', 'One line description of project.',
'Miscellaneous'),
]
# -- Extension configuration -------------------------------------------------
# -- Options for intersphinx extension ---------------------------------------
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {
'python': ('https://docs.python.org/3', None),
'http://flask.pocoo.org/docs/': None,
}

View File

@ -0,0 +1,187 @@
.. Flask-Executor documentation master file, created by
sphinx-quickstart on Sun Sep 23 18:52:39 2018.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Flask-Executor
==============
.. module:: flask_executor
Flask-Executor is a `Flask`_ extension that makes it easy to work with :py:mod:`concurrent.futures`
in your application.
Installation
------------
Flask-Executor is available on PyPI and can be installed with pip::
$ pip install flask-executor
Setup
------
The Executor extension can either be initialised directly::
from flask import Flask
from flask_executor import Executor
app = Flask(__name__)
executor = Executor(app)
Or through the factory method::
executor = Executor()
executor.init_app(app)
Configuration
-------------
To specify the type of executor to initialise, set ``EXECUTOR_TYPE`` inside your app configuration.
Valid values are ``'thread'`` (default) to initialise a
:class:`~concurrent.futures.ThreadPoolExecutor`, or ``'process'`` to initialise a
:class:`~concurrent.futures.ProcessPoolExecutor`::
app.config['EXECUTOR_TYPE'] = 'thread'
To define the number of worker threads for a :class:`~concurrent.futures.ThreadPoolExecutor` or the
number of worker processes for a :class:`~concurrent.futures.ProcessPoolExecutor`, set
``EXECUTOR_MAX_WORKERS`` in your app configuration. Valid values are any integer or ``None`` (default)
to let :py:mod:`concurrent.futures` pick defaults for you::
app.config['EXECUTOR_MAX_WORKERS'] = 5
If multiple executors are needed, :class:`flask_executor.Executor` can be initialised with a ``name``
parameter. Named executors will look for configuration variables prefixed with the specified ``name``
value, uppercased::
app.config['CUSTOM_EXECUTOR_TYPE'] = 'thread'
app.config['CUSTOM_EXECUTOR_MAX_WORKERS'] = 5
executor = Executor(app, name='custom')
Basic Usage
-----------
Flask-Executor supports the standard :class:`concurrent.futures.Executor` methods,
:meth:`~concurrent.futures.Executor.submit` and :meth:`~concurrent.futures.Executor.map`::
def fib(n):
if n <= 2:
return 1
else:
return fib(n-1) + fib(n-2)
@app.route('/run_fib')
def run_fib():
executor.submit(fib, 5)
executor.map(fib, range(1, 6))
return 'OK'
Submitting a task via :meth:`~concurrent.futures.Executor.submit` returns a
:class:`flask_executor.FutureProxy` object, a subclass of
:class:`concurrent.futures.Future` from which you can retrieve your job status or result.
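Since a :class:`~flask_executor.FutureProxy` passes calls through to the underlying Future, the familiar :py:mod:`concurrent.futures` idioms apply unchanged. A minimal sketch using only the standard library (no Flask required)::

```python
from concurrent.futures import ThreadPoolExecutor

def fib(n):
    return 1 if n <= 2 else fib(n - 1) + fib(n - 2)

with ThreadPoolExecutor(max_workers=2) as executor:
    future = executor.submit(fib, 10)
    # The same interface a FutureProxy exposes: done(), running(), result(), ...
    print(future.result())  # result() blocks until the task finishes -> 55
```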
Contexts
--------
When calling :meth:`~concurrent.futures.Executor.submit` or :meth:`~concurrent.futures.Executor.map`
Flask-Executor will wrap `ThreadPoolExecutor` callables with a copy of both the current application
context and current request context. Code that must be run in these contexts or that depends on
information or configuration stored in :data:`flask.current_app`, :data:`flask.request` or
:data:`flask.g` can be submitted to the executor without modification.
Note: due to limitations in Python's default object serialisation and a lack of shared memory space between subprocesses, contexts cannot be pushed to `ProcessPoolExecutor()` workers.
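The wrapping can be illustrated without Flask using :py:mod:`contextvars`, the same mechanism recent Flask versions use for their contexts. This is a simplified sketch, not the actual implementation, and ``request_id`` is a hypothetical stand-in: Flask-Executor copies the full application and request contexts rather than a single variable::

```python
import contextvars
from concurrent.futures import ThreadPoolExecutor

# A hypothetical per-request variable, standing in for Flask's contexts.
request_id = contextvars.ContextVar("request_id")

def handle():
    # Runs in a worker thread, yet sees the submitting thread's context.
    return request_id.get()

def submit_with_context(executor, fn, *args):
    # Copy the caller's context and run the callable inside it -- the
    # same idea Flask-Executor applies to the app and request contexts.
    ctx = contextvars.copy_context()
    return executor.submit(ctx.run, fn, *args)

request_id.set("req-42")
with ThreadPoolExecutor() as executor:
    future = submit_with_context(executor, handle)
print(future.result())  # -> req-42
```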
Futures
-------
:class:`flask_executor.FutureProxy` objects look and behave like normal :class:`concurrent.futures.Future`
objects, but allow `flask_executor` to override certain methods and add additional behaviours.
When submitting a callable to :meth:`~concurrent.futures.Future.add_done_callback`, callables are
wrapped with a copy of both the current application context and current request context.
You may want to preserve access to Futures returned from the executor, so that you can retrieve the
results in a different part of your application. Flask-Executor allows Futures to be stored within
the executor itself and provides methods for querying and returning them in different parts of your
app::
@app.route('/start-task')
def start_task():
executor.submit_stored('calc_power', pow, 323, 1235)
return jsonify({'result':'success'})
@app.route('/get-result')
def get_result():
if not executor.futures.done('calc_power'):
return jsonify({'status': executor.futures._state('calc_power')})
future = executor.futures.pop('calc_power')
return jsonify({'status': 'done', 'result': future.result()})
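Internally this is a small bounded mapping (``flask_executor.futures.FutureCollection``, whose source appears later in this diff). The eviction behaviour can be sketched with an ``OrderedDict``; the ``FutureStore`` name here is illustrative only::

```python
from collections import OrderedDict
from concurrent.futures import ThreadPoolExecutor

class FutureStore:
    # Minimal sketch of the bounded store pattern; the real class is
    # flask_executor.futures.FutureCollection.
    def __init__(self, max_length=50):
        self.max_length = max_length
        self._futures = OrderedDict()

    def add(self, key, future):
        if key in self._futures:
            raise ValueError("{} already stored".format(key))
        self._futures[key] = future
        while len(self._futures) > self.max_length:
            self._futures.popitem(last=False)  # evict the oldest Future

    def pop(self, key):
        return self._futures.pop(key, None)

store = FutureStore(max_length=2)
with ThreadPoolExecutor() as executor:
    for key in ("a", "b", "c"):
        store.add(key, executor.submit(pow, 2, 10))

print(store.pop("a"))           # evicted when "c" arrived -> None
print(store.pop("c").result())  # -> 1024
```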
Decoration
----------
Flask-Executor lets you decorate functions in the same style as distributed task queues like
`Celery`_ when using the ``'thread'`` executor type::
@executor.job
def fib(n):
if n <= 2:
return 1
else:
return fib(n-1) + fib(n-2)
@app.route('/decorate_fib')
def decorate_fib():
fib.submit(5)
fib.submit_stored('fibonacci', 5)
fib.map(range(1, 6))
return 'OK'
.. toctree::
:maxdepth: 2
:caption: Contents:
api/modules
Default Callbacks
-----------------
:class:`concurrent.futures.Future` objects can have callbacks attached by using
:meth:`~concurrent.futures.Future.add_done_callback`. Flask-Executor lets you specify default
callbacks that will be applied to all new futures created by the executor::
def some_callback(future):
# do something with future
executor.add_default_done_callback(some_callback)
# Callback will be added to the below task automatically
executor.submit(pow, 323, 1235)
Propagate Exceptions
--------------------
Normally any exceptions thrown by background threads or processes will be swallowed unless explicitly
checked for. To instead surface all exceptions thrown by background tasks, Flask-Executor can add
a special default callback that raises any exceptions thrown by tasks submitted to the executor::
app.config['EXECUTOR_PROPAGATE_EXCEPTIONS'] = True
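The callback itself is small: it simply re-raises the stored exception (the real ``propagate_exceptions_callback`` appears in ``flask_executor/executor.py`` later in this diff). A sketch of the behaviour::

```python
from concurrent.futures import ThreadPoolExecutor

def propagate_exceptions_callback(future):
    # Re-raise whatever the finished task threw, instead of swallowing it.
    exc = future.exception()
    if exc:
        raise exc

def boom():
    raise RuntimeError("task failed")

with ThreadPoolExecutor() as executor:
    future = executor.submit(boom)

try:
    propagate_exceptions_callback(future)
except RuntimeError as exc:
    print(exc)  # -> task failed
```

Flask-Executor attaches this as a default done callback, so the traceback is emitted as soon as a task finishes rather than lying dormant until ``result()`` is checked.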
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
.. _Flask: http://flask.pocoo.org/
.. _Celery: http://www.celeryproject.org/

View File

@ -0,0 +1,5 @@
from flask_executor.executor import Executor
__all__ = ('Executor',)
__version__ = '0.10.0'

View File

@ -0,0 +1,273 @@
import concurrent.futures
import contextvars
import copy
import re
from flask import copy_current_request_context, current_app, g
from flask_executor.futures import FutureCollection, FutureProxy
from flask_executor.helpers import InstanceProxy, str2bool
def get_current_app_context():
try:
from flask.globals import _cv_app
return _cv_app.get(None)
except ImportError:
from flask.globals import _app_ctx_stack
return _app_ctx_stack.top
def push_app_context(fn):
app = current_app._get_current_object()
_g = copy.copy(g)
def wrapper(*args, **kwargs):
with app.app_context():
ctx = get_current_app_context()
ctx.g = _g
return fn(*args, **kwargs)
return wrapper
def propagate_exceptions_callback(future):
exc = future.exception()
if exc:
raise exc
class ExecutorJob:
"""Wraps a function with an executor so to allow the wrapped function to
submit itself directly to the executor."""
def __init__(self, executor, fn):
self.executor = executor
self.fn = fn
def submit(self, *args, **kwargs):
future = self.executor.submit(self.fn, *args, **kwargs)
return future
def submit_stored(self, future_key, *args, **kwargs):
future = self.executor.submit_stored(future_key, self.fn, *args, **kwargs)
return future
def map(self, *iterables, **kwargs):
results = self.executor.map(self.fn, *iterables, **kwargs)
return results
class Executor(InstanceProxy, concurrent.futures._base.Executor):
"""An executor interface for :py:mod:`concurrent.futures` designed for
working with Flask applications.
:param app: A Flask application instance.
:param name: An optional name for the executor. This can be used to
configure multiple executors. Named executors will look for
configuration variables prefixed with the name in uppercase,
e.g. ``CUSTOM_EXECUTOR_TYPE``.
"""
def __init__(self, app=None, name=''):
self.app = app
self._default_done_callbacks = []
self.futures = FutureCollection()
if re.match(r'^(\w+)?$', name) is None:
raise ValueError(
"Executor names may only contain letters, numbers or underscores"
)
self.name = name
prefix = name.upper() + '_' if name else ''
self.EXECUTOR_TYPE = prefix + 'EXECUTOR_TYPE'
self.EXECUTOR_MAX_WORKERS = prefix + 'EXECUTOR_MAX_WORKERS'
self.EXECUTOR_FUTURES_MAX_LENGTH = prefix + 'EXECUTOR_FUTURES_MAX_LENGTH'
self.EXECUTOR_PROPAGATE_EXCEPTIONS = prefix + 'EXECUTOR_PROPAGATE_EXCEPTIONS'
self.EXECUTOR_PUSH_APP_CONTEXT = prefix + 'EXECUTOR_PUSH_APP_CONTEXT'
if app is not None:
self.init_app(app)
def init_app(self, app):
"""Initialise application. This will also intialise the configured
executor type:
* :class:`concurrent.futures.ThreadPoolExecutor`
* :class:`concurrent.futures.ProcessPoolExecutor`
"""
app.config.setdefault(self.EXECUTOR_TYPE, 'thread')
app.config.setdefault(self.EXECUTOR_PUSH_APP_CONTEXT, True)
futures_max_length = app.config.setdefault(self.EXECUTOR_FUTURES_MAX_LENGTH, None)
propagate_exceptions = app.config.setdefault(self.EXECUTOR_PROPAGATE_EXCEPTIONS, False)
if futures_max_length is not None:
self.futures.max_length = int(futures_max_length)
if str2bool(propagate_exceptions):
self.add_default_done_callback(propagate_exceptions_callback)
self._self = self._make_executor(app)
app.extensions[self.name + 'executor'] = self
def _make_executor(self, app):
executor_max_workers = app.config.setdefault(self.EXECUTOR_MAX_WORKERS, None)
if executor_max_workers is not None:
executor_max_workers = int(executor_max_workers)
executor_type = app.config[self.EXECUTOR_TYPE]
if executor_type == 'thread':
_executor = concurrent.futures.ThreadPoolExecutor
elif executor_type == 'process':
_executor = concurrent.futures.ProcessPoolExecutor
else:
raise ValueError("{} is not a valid executor type.".format(executor_type))
return _executor(max_workers=executor_max_workers)
def _prepare_fn(self, fn, force_copy=False):
if isinstance(self._self, concurrent.futures.ThreadPoolExecutor) \
or force_copy:
fn = copy_current_request_context(fn)
if current_app.config[self.EXECUTOR_PUSH_APP_CONTEXT]:
fn = push_app_context(fn)
return fn
def submit(self, fn, *args, **kwargs):
r"""Schedules the callable, fn, to be executed as fn(\*args \**kwargs)
and returns a :class:`~flask_executor.futures.FutureProxy` object, a
:class:`~concurrent.futures.Future` subclass representing
the execution of the callable.
See also :meth:`concurrent.futures.Executor.submit`.
Callables are wrapped with a copy of the current application context and the
current request context. Code that depends on information or
configuration stored in :data:`flask.current_app`,
:data:`flask.request` or :data:`flask.g` can be run without
modification.
Note: Because callables only have access to *copies* of the application
or request contexts, any changes made to these copies will not be
reflected in the original view. Further, changes in the original app or
request context that occur after the callable is submitted will not be
available to the callable.
Example::
future = executor.submit(pow, 323, 1235)
print(future.result())
:param fn: The callable to be executed.
:param \*args: A list of positional parameters used with
the callable.
:param \**kwargs: A dict of named parameters used with
the callable.
:rtype: flask_executor.FutureProxy
"""
fn = self._prepare_fn(fn)
future = self._self.submit(fn, *args, **kwargs)
for callback in self._default_done_callbacks:
future.add_done_callback(callback)
return FutureProxy(future, self)
def submit_stored(self, future_key, fn, *args, **kwargs):
r"""Submits the callable using :meth:`Executor.submit` and stores the
Future in the executor via a
:class:`~flask_executor.futures.FutureCollection` object available at
:data:`Executor.futures`. These futures can be retrieved anywhere
inside your application and queried for status or popped from the
collection. Due to memory concerns, the maximum length of the
FutureCollection is limited, and the oldest Futures will be dropped
when the limit is exceeded.
See :class:`flask_executor.futures.FutureCollection` for more
information on how to query futures in a collection.
Example::
@app.route('/start-task')
def start_task():
executor.submit_stored('calc_power', pow, 323, 1235)
return jsonify({'result':'success'})
@app.route('/get-result')
def get_result():
if not executor.futures.done('calc_power'):
future_status = executor.futures._state('calc_power')
return jsonify({'status': future_status})
future = executor.futures.pop('calc_power')
return jsonify({'status': 'done', 'result': future.result()})
:param future_key: Stores the Future for the submitted task inside the
executor's ``futures`` object with the specified
key.
:param fn: The callable to be executed.
:param \*args: A list of positional parameters used with
the callable.
:param \**kwargs: A dict of named parameters used with
the callable.
:rtype: concurrent.futures.Future
"""
future = self.submit(fn, *args, **kwargs)
self.futures.add(future_key, future)
return future
def map(self, fn, *iterables, **kwargs):
r"""Submits the callable, fn, and an iterable of arguments to the
executor and returns the results inside a generator.
See also :meth:`concurrent.futures.Executor.map`.
Callables are wrapped with a copy of the current application context and the
current request context. Code that depends on information or
configuration stored in :data:`flask.current_app`,
:data:`flask.request` or :data:`flask.g` can be run without
modification.
Note: Because callables only have access to *copies* of the application
or request contexts, any changes made to these copies will not be reflected in the original
view. Further, changes in the original app or request context that
occur after the callable is submitted will not be available to the
callable.
:param fn: The callable to be executed.
:param \*iterables: An iterable of arguments the callable will apply to.
:param \**kwargs: A dict of named parameters to pass to the underlying
executor's :meth:`~concurrent.futures.Executor.map`
method.
"""
fn = self._prepare_fn(fn)
return self._self.map(fn, *iterables, **kwargs)
def job(self, fn):
"""Decorator. Use this to transform functions into `ExecutorJob`
instances that can submit themselves directly to the executor.
Example::
@executor.job
def fib(n):
if n <= 2:
return 1
else:
return fib(n-1) + fib(n-2)
future = fib.submit(5)
results = fib.map(range(1, 6))
"""
if isinstance(self._self, concurrent.futures.ProcessPoolExecutor):
raise TypeError(
"Can't decorate {}: Executors that use multiprocessing "
"don't support decorators".format(fn)
)
return ExecutorJob(executor=self, fn=fn)
def add_default_done_callback(self, fn):
"""Registers callable to be attached to all newly created futures. When a
callable is submitted to the executor,
:meth:`concurrent.futures.Future.add_done_callback` is called for every default
callable that has been set."
:param fn: The callable to be added to the list of default done callbacks for new
Futures.
"""
self._default_done_callbacks.append(fn)

View File

@ -0,0 +1,107 @@
from collections import OrderedDict
from concurrent.futures import Future
from flask_executor.helpers import InstanceProxy
class FutureCollection:
"""A FutureCollection is an object to store and interact with
:class:`concurrent.futures.Future` objects. It provides access to all
attributes and methods of a Future by proxying attribute calls to the
stored Future object.
To access the methods of a Future from a FutureCollection instance, include
a valid ``future_key`` value as the first argument of the method call. To
access attributes, call them as though they were a method with
``future_key`` as the sole argument. If ``future_key`` does not exist, the
call will always return None. If ``future_key`` does exist but the
referenced Future does not contain the requested attribute an
:exc:`AttributeError` will be raised.
To prevent memory exhaustion a FutureCollection instance can be bounded by
number of items using the ``max_length`` parameter. As a best practice,
Futures should be popped once they are ready for use, with the proxied
attribute form used to determine whether a Future is ready to be used or
discarded.
:param max_length: Maximum number of Futures to store. Oldest Futures are
discarded first.
"""
def __init__(self, max_length=50):
self.max_length = max_length
self._futures = OrderedDict()
def __contains__(self, future):
return future in self._futures.values()
def __len__(self):
return len(self._futures)
def __getattr__(self, attr):
# Call any valid Future method or attribute
def _future_attr(future_key, *args, **kwargs):
if future_key not in self._futures:
return None
future_attr = getattr(self._futures[future_key], attr)
if callable(future_attr):
return future_attr(*args, **kwargs)
return future_attr
return _future_attr
def _check_limits(self):
if self.max_length is not None:
while len(self._futures) > self.max_length:
self._futures.popitem(last=False)
def add(self, future_key, future):
"""Add a new Future. If ``max_length`` limit was defined for the
FutureCollection, old Futures may be dropped to respect this limit.
:param future_key: Key for the Future to be added.
:param future: Future to be added.
"""
if future_key in self._futures:
raise ValueError("future_key {} already exists".format(future_key))
self._futures[future_key] = future
self._check_limits()
def pop(self, future_key):
"""Return a Future and remove it from the collection. Futures that are
ready to be used should always be popped so they do not continue to
consume memory.
Returns ``None`` if the key doesn't exist.
:param future_key: Key for the Future to be returned.
"""
return self._futures.pop(future_key, None)
class FutureProxy(InstanceProxy, Future):
"""A FutureProxy is an instance proxy that wraps an instance of
:class:`concurrent.futures.Future`. Since an executor can't be made to
return a subclassed Future object, this proxy class is used to override
instance behaviours whilst providing an agnostic method of accessing
the original methods and attributes.
:param future: An instance of :class:`~concurrent.futures.Future` that
the proxy will provide access to.
:param executor: An instance of :class:`flask_executor.Executor` which
will be used to provide access to Flask context features.
"""
def __init__(self, future, executor):
self._self = future
self._executor = executor
def add_done_callback(self, fn):
fn = self._executor._prepare_fn(fn, force_copy=True)
return self._self.add_done_callback(fn)
def __eq__(self, obj):
return self._self == obj
def __hash__(self):
return self._self.__hash__()

View File

@ -0,0 +1,37 @@
PROXIED_OBJECT = '__proxied_object'
def str2bool(v):
return str(v).lower() in ("yes", "true", "t", "1")
class InstanceProxy(object):
def __init__(self, proxied_obj):
self._self = proxied_obj
@property
def _self(self):
try:
return object.__getattribute__(self, PROXIED_OBJECT)
except AttributeError:
return None
@_self.setter
def _self(self, proxied_obj):
object.__setattr__(self, PROXIED_OBJECT, proxied_obj)
return self
def __getattribute__(self, attr):
super_cls_dict = InstanceProxy.__dict__
cls_dict = object.__getattribute__(self, '__class__').__dict__
inst_dict = object.__getattribute__(self, '__dict__')
if attr in cls_dict or attr in inst_dict or attr in super_cls_dict:
return object.__getattribute__(self, attr)
target_obj = object.__getattribute__(self, PROXIED_OBJECT)
return object.__getattribute__(target_obj, attr)
def __repr__(self):
class_name = object.__getattribute__(self, '__class__').__name__
target_repr = repr(self._self)
return '<%s( %s )>' % (class_name, target_repr)

View File

@ -0,0 +1,52 @@
import setuptools
from setuptools.command.test import test
import sys
try:
from flask_executor import __version__ as version
except ImportError:
import re
pattern = re.compile(r"__version__ = '(.*)'")
with open('flask_executor/__init__.py') as f:
version = pattern.search(f.read()).group(1)
with open('README.md', 'r') as fh:
long_description = fh.read()
class pytest(test):
def run_tests(self):
import pytest
errno = pytest.main(self.test_args)
sys.exit(errno)
setuptools.setup(
name='Flask-Executor',
version=version,
author='Dave Chevell',
author_email='chevell@gmail.com',
description='An easy to use Flask wrapper for concurrent.futures',
long_description=long_description,
long_description_content_type='text/markdown',
url='https://github.com/dchevell/flask-executor',
packages=setuptools.find_packages(exclude=['tests']),
keywords=['flask', 'concurrent.futures'],
classifiers=[
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
],
license='MIT',
install_requires=['Flask'],
extras_require={
':python_version == "2.7"': ['futures>=3.1.1'],
'test': ['pytest', 'pytest-cov', 'codecov', 'flask-sqlalchemy'],
},
test_suite='tests',
cmdclass={
'test': pytest
}
)

View File

@ -0,0 +1,18 @@
from flask import Flask
import pytest
from flask_executor import Executor
@pytest.fixture(params=['thread_push_app_context', 'thread_copy_app_context', 'process'])
def app(request):
app = Flask(__name__)
app.config['EXECUTOR_TYPE'] = 'process' if request.param == 'process' else 'thread'
app.config['EXECUTOR_PUSH_APP_CONTEXT'] = request.param == 'thread_push_app_context'
return app
@pytest.fixture
def default_app():
app = Flask(__name__)
return app

View File

@ -0,0 +1,376 @@
import concurrent
import concurrent.futures
import logging
import random
import time
from threading import local
import pytest
from flask import current_app, g, request
from flask_executor import Executor
from flask_executor.executor import propagate_exceptions_callback
# Reusable functions for tests
def fib(n):
if n <= 2:
return 1
else:
return fib(n - 1) + fib(n - 2)
def app_context_test_value(_=None):
return current_app.config['TEST_VALUE']
def request_context_test_value(_=None):
return request.test_value
def g_context_test_value(_=None):
return g.test_value
def fail():
time.sleep(0.1)
print(hello)  # intentional NameError: 'hello' is undefined
def test_init(app):
executor = Executor(app)
assert 'executor' in app.extensions
assert isinstance(executor, concurrent.futures._base.Executor)
assert isinstance(executor._self, concurrent.futures._base.Executor)
assert getattr(executor, 'shutdown')
def test_factory_init(app):
executor = Executor()
executor.init_app(app)
assert 'executor' in app.extensions
assert isinstance(executor._self, concurrent.futures._base.Executor)
def test_thread_executor_init(default_app):
default_app.config['EXECUTOR_TYPE'] = 'thread'
executor = Executor(default_app)
assert isinstance(executor._self, concurrent.futures.ThreadPoolExecutor)
assert isinstance(executor, concurrent.futures.ThreadPoolExecutor)
def test_process_executor_init(default_app):
default_app.config['EXECUTOR_TYPE'] = 'process'
executor = Executor(default_app)
assert isinstance(executor._self, concurrent.futures.ProcessPoolExecutor)
assert isinstance(executor, concurrent.futures.ProcessPoolExecutor)
def test_default_executor_init(default_app):
executor = Executor(default_app)
assert isinstance(executor._self, concurrent.futures.ThreadPoolExecutor)
def test_invalid_executor_init(default_app):
default_app.config['EXECUTOR_TYPE'] = 'invalid_value'
try:
executor = Executor(default_app)
except ValueError:
assert True
else:
assert False
def test_submit(app):
executor = Executor(app)
with app.test_request_context(''):
future = executor.submit(fib, 5)
assert future.result() == fib(5)
def test_max_workers(app):
EXECUTOR_MAX_WORKERS = 10
app.config['EXECUTOR_MAX_WORKERS'] = EXECUTOR_MAX_WORKERS
executor = Executor(app)
assert executor._max_workers == EXECUTOR_MAX_WORKERS
assert executor._self._max_workers == EXECUTOR_MAX_WORKERS
def test_thread_decorator_submit(default_app):
default_app.config['EXECUTOR_TYPE'] = 'thread'
executor = Executor(default_app)
@executor.job
def decorated(n):
return fib(n)
with default_app.test_request_context(''):
future = decorated.submit(5)
assert future.result() == fib(5)
def test_thread_decorator_submit_stored(default_app):
default_app.config['EXECUTOR_TYPE'] = 'thread'
executor = Executor(default_app)
@executor.job
def decorated(n):
return fib(n)
with default_app.test_request_context():
future = decorated.submit_stored('fibonacci', 35)
assert executor.futures.done('fibonacci') is False
assert future in executor.futures
executor.futures.pop('fibonacci')
assert future not in executor.futures
def test_thread_decorator_map(default_app):
iterable = list(range(5))
default_app.config['EXECUTOR_TYPE'] = 'thread'
executor = Executor(default_app)
@executor.job
def decorated(n):
return fib(n)
with default_app.test_request_context(''):
results = decorated.map(iterable)
for i, r in zip(iterable, results):
assert fib(i) == r
def test_process_decorator(default_app):
''' Using decorators should fail with a TypeError when using the ProcessPoolExecutor '''
default_app.config['EXECUTOR_TYPE'] = 'process'
executor = Executor(default_app)
try:
@executor.job
def decorated(n):
return fib(n)
except TypeError:
pass
else:
assert 0
def test_submit_app_context(default_app):
test_value = random.randint(1, 101)
default_app.config['TEST_VALUE'] = test_value
executor = Executor(default_app)
with default_app.test_request_context(''):
future = executor.submit(app_context_test_value)
assert future.result() == test_value
def test_submit_g_context_process(default_app):
test_value = random.randint(1, 101)
executor = Executor(default_app)
with default_app.test_request_context(''):
g.test_value = test_value
future = executor.submit(g_context_test_value)
assert future.result() == test_value
def test_submit_request_context(default_app):
test_value = random.randint(1, 101)
executor = Executor(default_app)
with default_app.test_request_context(''):
request.test_value = test_value
future = executor.submit(request_context_test_value)
assert future.result() == test_value
def test_map_app_context(default_app):
test_value = random.randint(1, 101)
iterator = list(range(5))
default_app.config['TEST_VALUE'] = test_value
executor = Executor(default_app)
with default_app.test_request_context(''):
results = executor.map(app_context_test_value, iterator)
for r in results:
assert r == test_value
def test_map_g_context_process(default_app):
test_value = random.randint(1, 101)
iterator = list(range(5))
executor = Executor(default_app)
with default_app.test_request_context(''):
g.test_value = test_value
results = executor.map(g_context_test_value, iterator)
for r in results:
assert r == test_value
def test_map_request_context(default_app):
test_value = random.randint(1, 101)
iterator = list(range(5))
executor = Executor(default_app)
with default_app.test_request_context('/'):
request.test_value = test_value
results = executor.map(request_context_test_value, iterator)
for r in results:
    assert r == test_value


def test_executor_stored_future(default_app):
    executor = Executor(default_app)
    with default_app.test_request_context():
        future = executor.submit_stored('fibonacci', fib, 35)
    assert executor.futures.done('fibonacci') is False
    assert future in executor.futures
    executor.futures.pop('fibonacci')
    assert future not in executor.futures


def test_set_max_futures(default_app):
    default_app.config['EXECUTOR_FUTURES_MAX_LENGTH'] = 10
    executor = Executor(default_app)
    assert executor.futures.max_length == default_app.config['EXECUTOR_FUTURES_MAX_LENGTH']


def test_named_executor(default_app):
    name = 'custom'
    EXECUTOR_MAX_WORKERS = 5
    CUSTOM_EXECUTOR_MAX_WORKERS = 10
    default_app.config['EXECUTOR_MAX_WORKERS'] = EXECUTOR_MAX_WORKERS
    default_app.config['CUSTOM_EXECUTOR_MAX_WORKERS'] = CUSTOM_EXECUTOR_MAX_WORKERS
    executor = Executor(default_app)
    custom_executor = Executor(default_app, name=name)
    assert 'executor' in default_app.extensions
    assert name + 'executor' in default_app.extensions
    assert executor._self._max_workers == EXECUTOR_MAX_WORKERS
    assert executor._max_workers == EXECUTOR_MAX_WORKERS
    assert custom_executor._self._max_workers == CUSTOM_EXECUTOR_MAX_WORKERS
    assert custom_executor._max_workers == CUSTOM_EXECUTOR_MAX_WORKERS


def test_named_executor_submit(app):
    name = 'custom'
    custom_executor = Executor(app, name=name)
    with app.test_request_context(''):
        future = custom_executor.submit(fib, 5)
    assert future.result() == fib(5)


def test_named_executor_name(default_app):
    name = 'invalid name'
    try:
        executor = Executor(default_app, name=name)
    except ValueError:
        assert True
    else:
        assert False


def test_default_done_callback(app):
    executor = Executor(app)

    def callback(future):
        setattr(future, 'test', 'test')

    executor.add_default_done_callback(callback)
    with app.test_request_context('/'):
        future = executor.submit(fib, 5)
        concurrent.futures.wait([future])
        assert hasattr(future, 'test')


def test_propagate_exception_callback(app, caplog):
    caplog.set_level(logging.ERROR)
    app.config['EXECUTOR_PROPAGATE_EXCEPTIONS'] = True
    executor = Executor(app)
    with pytest.raises(NameError):
        with app.test_request_context('/'):
            future = executor.submit(fail)
            concurrent.futures.wait([future])
            future.result()


def test_coerce_config_types(default_app):
    default_app.config['EXECUTOR_MAX_WORKERS'] = '5'
    default_app.config['EXECUTOR_FUTURES_MAX_LENGTH'] = '10'
    default_app.config['EXECUTOR_PROPAGATE_EXCEPTIONS'] = 'true'
    executor = Executor(default_app)
    with default_app.test_request_context():
        future = executor.submit_stored('fibonacci', fib, 35)


def test_shutdown_executor(default_app):
    executor = Executor(default_app)
    assert executor._shutdown is False
    executor.shutdown()
    assert executor._shutdown is True


def test_pre_init_executor(default_app):
    executor = Executor()

    @executor.job
    def decorated(n):
        return fib(n)

    assert executor
    executor.init_app(default_app)
    with default_app.test_request_context(''):
        future = decorated.submit(5)
    assert future.result() == fib(5)


thread_local = local()


def set_thread_local():
    if hasattr(thread_local, 'value'):
        raise ValueError('thread local already present')
    thread_local.value = True


def clear_thread_local(response_or_exc):
    if hasattr(thread_local, 'value'):
        del thread_local.value
    return response_or_exc


def test_teardown_appcontext_is_called(default_app):
    default_app.config['EXECUTOR_MAX_WORKERS'] = 1
    default_app.config['EXECUTOR_PUSH_APP_CONTEXT'] = True
    default_app.teardown_appcontext(clear_thread_local)
    executor = Executor(default_app)
    with default_app.test_request_context():
        futures = [executor.submit(set_thread_local) for _ in range(2)]
    concurrent.futures.wait(futures)
    [propagate_exceptions_callback(future) for future in futures]


try:
    import flask_sqlalchemy
except ImportError:
    flask_sqlalchemy = None


@pytest.mark.skipif(flask_sqlalchemy is None, reason="flask_sqlalchemy not installed")
def test_sqlalchemy(default_app, caplog):
    default_app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {'echo_pool': 'debug', 'echo': 'debug'}
    default_app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///:memory:'
    default_app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
    default_app.config['EXECUTOR_PUSH_APP_CONTEXT'] = True
    default_app.config['EXECUTOR_MAX_WORKERS'] = 1
    db = flask_sqlalchemy.SQLAlchemy(default_app)

    def test_db():
        return list(db.session.execute('select 1'))

    executor = Executor(default_app)
    with default_app.test_request_context():
        for i in range(2):
            with caplog.at_level('DEBUG'):
                caplog.clear()
                future = executor.submit(test_db)
                concurrent.futures.wait([future])
                future.result()
            assert 'checked out from pool' in caplog.text
            assert 'being returned to pool' in caplog.text

View File

@ -0,0 +1,97 @@
import concurrent.futures
import time

import pytest

from flask_executor import Executor
from flask_executor.futures import FutureCollection, FutureProxy
from flask_executor.helpers import InstanceProxy


def fib(n):
    if n <= 2:
        return 1
    else:
        return fib(n-1) + fib(n-2)


def test_plain_future():
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    futures = FutureCollection()
    future = executor.submit(fib, 33)
    futures.add('fibonacci', future)
    assert futures.done('fibonacci') is False
    assert futures._state('fibonacci') is not None
    assert future in futures
    futures.pop('fibonacci')
    assert future not in futures


def test_missing_future():
    futures = FutureCollection()
    assert futures.running('test') is None


def test_duplicate_add_future():
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    futures = FutureCollection()
    future = executor.submit(fib, 33)
    futures.add('fibonacci', future)
    try:
        futures.add('fibonacci', future)
    except ValueError:
        assert True
    else:
        assert False


def test_futures_max_length():
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    futures = FutureCollection(max_length=10)
    future = executor.submit(pow, 2, 4)
    futures.add(0, future)
    assert future in futures
    assert len(futures) == 1
    for i in range(1, 11):
        futures.add(i, executor.submit(pow, 2, 4))
    assert len(futures) == 10
    assert future not in futures


def test_future_proxy(default_app):
    executor = Executor(default_app)
    with default_app.test_request_context(''):
        future = executor.submit(pow, 2, 4)
    # Test if we're returning a subclass of Future
    assert isinstance(future, concurrent.futures.Future)
    assert isinstance(future, FutureProxy)
    concurrent.futures.wait([future])
    # test standard Future methods and attributes
    assert future._state == concurrent.futures._base.FINISHED
    assert future.done()
    assert future.exception(timeout=0) is None


def test_add_done_callback(default_app):
    """Exceptions thrown in callbacks can't be easily caught and make it hard
    to test for callback failure. To combat this, a global variable is used to
    store the value of an exception and test for its existence.
    """
    executor = Executor(default_app)
    global exception
    exception = None
    with default_app.test_request_context(''):
        future = executor.submit(time.sleep, 0.5)

        def callback(future):
            global exception
            try:
                executor.submit(time.sleep, 0)
            except RuntimeError as e:
                exception = e

        future.add_done_callback(callback)
    concurrent.futures.wait([future])
    assert exception is None


def test_instance_proxy():
    class TestProxy(InstanceProxy):
        pass

    x = TestProxy(concurrent.futures.Future())
    assert isinstance(x, concurrent.futures.Future)
    assert 'TestProxy' in repr(x)
    assert 'Future' in repr(x)

View File

@ -0,0 +1,18 @@
#!/bin/bash

set -e

git clone https://github.com/python-restx/flask-restx opengnsys-flask-restx
cd opengnsys-flask-restx
git checkout 1.3.0
version=$(python3 ./setup.py --version)
cd ..

if [ -d "opengnsys-flask-restx-${version}" ] ; then
    echo "Directory opengnsys-flask-restx-${version} already exists, won't overwrite"
    exit 1
else
    rm -rf opengnsys-flask-restx/.git
    mv opengnsys-flask-restx "opengnsys-flask-restx-${version}"
    tar -c --xz -v -f "opengnsys-flask-restx_${version}.orig.tar.xz" "opengnsys-flask-restx-${version}"
fi

View File

@ -0,0 +1,21 @@
# EditorConfig is awesome: https://EditorConfig.org
# top-most EditorConfig file
root = true
# Unix-style newlines with a newline ending every file
[*]
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
# Matches multiple files with brace expansion notation
# Set default charset
[*.{js,py}]
charset = utf-8
# 4 space indentation
[*.py]
indent_style = space
indent_size = 4
max_line_length = 120

View File

@ -0,0 +1,44 @@
---
name: Bug Report
about: Tell us how Flask-RESTX is broken
title: ''
labels: bug
assignees: ''
---
### ***** **BEFORE LOGGING AN ISSUE** *****
- Is this something you can **debug and fix**? Send a pull request! Bug fixes and documentation fixes are welcome.
- Please check if a similar issue already exists or has been closed before. Seriously, nobody here is getting paid. Help us out and take five minutes to make sure you aren't submitting a duplicate.
- Please review the [guidelines for contributing](https://github.com/python-restx/flask-restx/blob/master/CONTRIBUTING.rst)
### **Code**
```python
from your_code import your_buggy_implementation
```
### **Repro Steps** (if applicable)
1. ...
2. ...
3. Broken!
### **Expected Behavior**
A description of what you expected to happen.
### **Actual Behavior**
A description of the unexpected, buggy behavior.
### **Error Messages/Stack Trace**
If applicable, add the stack trace produced by the error.
### **Environment**
- Python version
- Flask version
- Flask-RESTX version
- Other installed Flask extensions
### **Additional Context**
This is your last chance to provide any pertinent details, don't let this opportunity pass you by!

View File

@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

View File

@ -0,0 +1,14 @@
---
name: Question
about: Ask a question
title: ''
labels: question
assignees: ''
---
**Ask a question**
A clear and concise question
**Additional context**
Add any other context or screenshots about the feature request here.

View File

@ -0,0 +1,25 @@
## Proposed changes
At a high level, describe your reasoning for making these changes. If you are fixing a bug or resolving a feature request, **please include a link to the issue**.
## Types of changes
What types of changes does your code introduce?
_Put an `x` in the boxes that apply_
- [ ] Bugfix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
## Checklist
_Put an `x` in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your code._
- [ ] I have read the [guidelines for contributing](https://github.com/python-restx/flask-restx/blob/master/CONTRIBUTING.rst)
- [ ] All unit tests pass on my local version with my changes
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] I have added necessary documentation (if appropriate)
## Further comments
If this is a relatively large or complex change, kick off the discussion by explaining why you chose the solution you did and what alternatives you considered, etc...

View File

@ -0,0 +1,10 @@
name: Lint

on: [push, pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: psf/black@stable

View File

@ -0,0 +1,28 @@
name: Release

on:
  push:
    tags:
      - "*"

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Set up Python 3.8
        uses: actions/setup-python@v1
        with:
          python-version: 3.8
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install ".[dev]" wheel
      - name: Fetch web assets
        run: inv assets
      - name: Publish
        env:
          TWINE_USERNAME: "__token__"
          TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }}
        run: |
          python setup.py sdist bdist_wheel
          twine upload dist/*

View File

@ -0,0 +1,74 @@
name: Tests

on:
  pull_request:
    branches:
      - "*"
  push:
    branches:
      - "*"
  schedule:
    - cron: "0 1 * * *"
  workflow_dispatch:

jobs:
  unit-tests:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.8", "3.9", "3.10", "3.11", "pypy3.8", "3.12"]
        flask: ["<3.0.0", ">=3.0.0"]
    steps:
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
          allow-prereleases: true
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install "flask${{ matrix.flask }}"
          pip install ".[test]"
      - name: Test with inv
        run: inv cover qa
      - name: Codecov
        uses: codecov/codecov-action@v1
        with:
          file: ./coverage.xml

  bench:
    needs: unit-tests
    runs-on: ubuntu-latest
    if: github.event_name == 'pull_request'
    steps:
      - name: Set up Python 3.8
        uses: actions/setup-python@v4
        with:
          python-version: "3.8"
      - name: Checkout ${{ github.base_ref }}
        uses: actions/checkout@v3
        with:
          ref: ${{ github.base_ref }}
          path: base
      - name: Checkout ${{ github.ref }}
        uses: actions/checkout@v3
        with:
          ref: ${{ github.ref }}
          path: ref
      - name: Install dev dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -e "base[dev]"
      - name: Install ci dependencies for ${{ github.base_ref }}
        run: pip install -e "base[ci]"
      - name: Benchmarks for ${{ github.base_ref }}
        run: |
          cd base
          inv benchmark --max-time 4 --save
          mv .benchmarks ../ref/
      - name: Install ci dependencies for ${{ github.ref }}
        run: pip install -e "ref[ci]"
      - name: Benchmarks for ${{ github.ref }}
        run: |
          cd ref
          inv benchmark --max-time 4 --compare

View File

@ -0,0 +1,70 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
# C extensions
*.so
# Distribution / packaging
.Python
env/
bin/
build/
develop-eggs/
dist/
eggs/
lib/
lib64/
parts/
sdist/
var/
cover
*.egg-info/
.installed.cfg
*.egg
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.cache
nosetests.xml
coverage.xml
prof/
histograms/
.benchmarks
# Translations
*.mo
# Atom
*.cson
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Rope
.ropeproject
# Django stuff:
*.log
*.pot
# Sphinx documentation
doc/_build/
# Specifics
flask_restx/static
node_modules
# pyenv
.python-version
# Jet Brains
.idea

View File

@ -0,0 +1,63 @@
# configure updates globally
# default: all
# allowed: all, insecure, False
# update: all
# configure dependency pinning globally
# default: True
# allowed: True, False
pin: False
# set the default branch
# default: empty, the default branch on GitHub
# branch: dev
# update schedule
# default: empty
# allowed: "every day", "every week", ..
# schedule: "every day"
# search for requirement files
# default: True
# allowed: True, False
# search: True
# Specify requirement files by hand, default is empty
# default: empty
# allowed: list
# requirements:
# - requirements/staging.txt:
# # update all dependencies and pin them
# update: all
# pin: True
# - requirements/dev.txt:
# # don't update dependencies, use global 'pin' default
# update: False
# - requirements/prod.txt:
# # update insecure only, pin all
# update: insecure
# pin: True
# add a label to pull requests, default is not set
# requires private repo permissions, even on public repos
# default: empty
label_prs: update
# assign users to pull requests, default is not set
# requires private repo permissions, even on public repos
# default: empty
# assignees:
# - carl
# - carlsen
# configure the branch prefix the bot is using
# default: pyup-
branch_prefix: pyup/
# set a global prefix for PRs
# default: empty
pr_prefix: "[PyUP]"
# allow to close stale PRs
# default: True
close_prs: True

View File

@ -0,0 +1,342 @@
Flask-RestX Changelog
=====================
Basic structure is
::
.. _section-VERSION:
VERSION
-------
.. _bug_fixes-VERSION or _enhancements-VERSION:

Bug Fixes or Enhancements
~~~~~~~~~~~~~~~~~~~~~~~~~
* Message (TICKET) [CONTRIBUTOR]
Opening a release
-----------------
If you're the first contributor, add a new semver release to the
document. Place your addition in the correct category, giving a short
description (matching something in a git commit), the issue ID (or PR ID
if no issue opened), and your Github username for tracking contributors!
Releases prior to 0.3.0 were “best effort” filled out, but are missing
some info. If you see your contribution missing info, please open a PR
on the Changelog!
.. _section-1.3.0:
1.3.0
-----
.. _bug_fixes-1.3.0
Bug Fixes
~~~~~~~~~
::
* Fixing werkzeug 3 deprecated version import. Import is replaced by new style version check with importlib (#573) [Ryu-CZ]
* Fixing flask 3.0+ compatibility of `ModuleNotFoundError: No module named 'flask.scaffold'` Import error. (#567) [Ryu-CZ]
* Fix wrong status code and message on responses when handling `HTTPExceptions` (#569) [lkk7]
* Add flask 2 and flask 3 to testing matrix. [foarsitter]
* Update internally pinned pytest-flask to 1.3.0 for Flask >=3.0.0 support. [peter-doggart]
* Python 3.12 support. [foarsitter]
* Fix wrong status code and message on responses when handling HTTPExceptions. [lkk7]
* Update changelog Flask version table. [peter-doggart]
* Remove temporary package version restrictions for flask < 3.0.0, werkzeug and jsonschema (jsonschema future deprecation warning remains. See #553). [peter-doggart]
.. _section-1.2.0:
1.2.0
-----
.. _bug_fixes-1.2.0
Bug Fixes
~~~~~~~~~
::
* Fixing test as HTTP Header MIMEAccept expects quality-factor number in form of `X.X` (#547) [chipndell]
* Introduce temporary restrictions on some package versions. (`flask<3.0.0`, `werkzeug<3.0.0`, `jsonschema<=4.17.3`) [peter-doggart]
.. _enhancements-1.2.0:
Enhancements
~~~~~~~~~~~~
::
* Drop support for python 3.7
.. _section-1.1.0:
1.1.0
-----
.. _bug_fixes-1.1.0
Bug Fixes
~~~~~~~~~
::
* Update Swagger-UI to latest version to fix several security vulnerabilities. [peter-doggart]
* Add a warning to the docs that nested Blueprints are not supported. [peter-doggart]
* Add a note to the docs that flask-restx always registers the root (/) path. [peter-doggart]
.. _section-1.0.6:
1.0.6
-----
.. _bug_fixes-1.0.6
Bug Fixes
~~~~~~~~~
::
* Update Black to 2023 version [peter-doggart]
* Fix minor bug introduced in 1.0.5 that changed the behaviour of how flask-restx propagates exceptions. (#512) [peter-doggart]
* Update PyPI classifier to Production/Stable. [peter-doggart]
* Add support for Python 3.11 (requires update to invoke ^2.0.0) [peter-doggart]
.. _section-1.0.5:
1.0.5
-----
.. _bug_fixes-1.0.5
Bug Fixes
~~~~~~~~~
::
* Fix failing pypy python setup in github actions
* Fix compatibility with upcoming release of Flask 2.3+. (#485) [jdieter]
.. _section-1.0.2:
1.0.2
-----
.. _bug_fixes-1.0.2
Bug Fixes
~~~~~~~~~
::
* Properly remove six dependency
.. _section-1.0.1:
1.0.1
-----
.. _breaking-1.0.1
Breaking
~~~~~~~~
Starting from this release, we only support python versions >= 3.7
.. _bug_fixes-1.0.1
Bug Fixes
~~~~~~~~~
::
* Fix compatibility issue with werkzeug 2.1.0 (#423) [stacywsmith]
.. _enhancements-1.0.1:
Enhancements
~~~~~~~~~~~~
::
* Drop support for python <3.7
.. _section-0.5.1:
0.5.1
-----
.. _bug_fixes-0.5.1
Bug Fixes
~~~~~~~~~
::
* Optimize email regex (#372) [kevinbackhouse]
.. _section-0.5.0:
0.5.0
-----
.. _bug_fixes-0.5.0
Bug Fixes
~~~~~~~~~
::
* Fix Marshaled nested wildcard field with ordered=True (#326) [bdscharf]
* Fix Float Field Handling of None (#327) [bdscharf, TVLIgnacy]
* Fix Werkzeug and Flask > 2.0 issues (#341) [hbusul]
* Hotfix package.json [xuhdev]
.. _enhancements-0.5.0:
Enhancements
~~~~~~~~~~~~
::
* Stop calling got_request_exception when handled explicitly (#349) [chandlernine, VolkaRancho]
* Update doc links (#332) [EtiennePelletier]
* Structure demo zoo app (#328) [mehul-anshumali]
* Update Contributing.rst (#323) [physikerwelt]
* Upgrade swagger-ui (#316) [xuhdev]
.. _section-0.4.0:
0.4.0
-----
.. _bug_fixes-0.4.0
Bug Fixes
~~~~~~~~~
::
* Fix Namespace error handlers when propagate_exceptions=True (#285) [mjreiss]
* pin flask and werkzeug due to breaking changes (#308) [jchittum]
* The Flask/Blueprint API moved to the Scaffold base class (#308) [jloehel]
.. _enhancements-0.4.0:
Enhancements
~~~~~~~~~~~~
::
* added specs-url-scheme option for API (#237) [DustinMoriarty]
* Doc enhancements [KAUTH, Abdur-rahmaanJ]
* New example with loosely couple implementation [maurerle]
.. _section-0.3.0:
0.3.0
-----
.. _bug_fixes-0.3.0:
Bug Fixes
~~~~~~~~~
::
* Make error handlers order of registration respected when handling errors (#202) [avilaton]
* add prefix to config setting (#114) [heeplr]
* Doc fixes [openbrian, mikhailpashkov, rich0rd, Rich107, kashyapm94, SteadBytes, ziirish]
* Use relative path for `api.specs_url` (#188) [jslay88]
* Allow example=False (#203) [ogenstad]
* Add support for recursive models (#110) [peterjwest, buggyspace, Drarok, edwardfung123]
* generate choices schema without collectionFormat (#164) [leopold-p]
* Catch TypeError in marshalling (#75) [robyoung]
* Unable to access nested list property (#91) [arajkumar]
.. _enhancements-0.3.0:
Enhancements
~~~~~~~~~~~~
::
* Update Python versions [johnthagen]
* allow strict mode when validating model fields (#186) [maho]
* Make it possible to include "unused" models in the generated swagger documentation (#90) [volfpeter]
.. _section-0.2.0:
0.2.0
-----
This release properly fixes the issue raised by the release of werkzeug
1.0.
.. _bug-fixes-0.2.0:
Bug Fixes
~~~~~~~~~
::
* Remove deprecated werkzeug imports (#35)
* Fix OrderedDict imports (#54)
* Fixing Swagger Issue when using @api.expect() on a request parser (#20)
.. _enhancements-0.2.0:
Enhancements
~~~~~~~~~~~~
::
* use black to enforce a formatting codestyle (#60)
* improve test workflows
.. _section-0.1.1:
0.1.1
-----
This release is mostly a hotfix release to address incompatibility issue
with the recent release of werkzeug 1.0.
.. _bug-fixes-0.1.1:
Bug Fixes
~~~~~~~~~
::
* pin werkzeug version (#39)
* register wildcard fields in docs (#24)
* update package.json version accordingly with the flask-restx version and update the author (#38)
.. _enhancements-0.1.1:
Enhancements
~~~~~~~~~~~~
::
* use github actions instead of travis-ci (#18)
.. _section-0.1.0:
0.1.0
-----
.. _bug-fixes-0.1.0:
Bug Fixes
~~~~~~~~~
::
* Fix exceptions/error handling bugs https://github.com/noirbizarre/flask-restplus/pull/706/files noirbizarre/flask-restplus#741
* Fix illegal characters in JSON references to model names noirbizarre/flask-restplus#653
* Support envelope parameter in Swagger documentation noirbizarre/flask-restplus#673
* Fix polymorph field ambiguity noirbizarre/flask-restplus#691
* Fix wildcard support for fields.Nested and fields.List noirbizarre/flask-restplus#739
.. _enhancements-0.1.0:
Enhancements
~~~~~~~~~~~~
::
* Api/Namespace individual loggers noirbizarre/flask-restplus#708
* Various deprecated import changes noirbizarre/flask-restplus#732 noirbizarre/flask-restplus#738
* Start the Flask-RESTX fork!
* Rename all the things (#2 #9)
* Set up releases from CI (#12)
* Not a library enhancement but this was much needed - thanks @ziirish !

View File

@ -0,0 +1,135 @@
Contributing
============
flask-restx is open-source and very open to contributions.
If you're part of a corporation with an NDA, you may need to update the license.
See Updating Copyright below
Submitting issues
-----------------
Issues are contributions in their own way, so don't hesitate
to submit reports on the `official bugtracker`_.
Provide as much information as possible to describe the issue:
- the flask-restx version used
- a stacktrace
- installed applications list
- a code sample to reproduce the issue
- ...
Submitting patches (bugfix, features, ...)
------------------------------------------
If you want to contribute some code:
1. fork the `official flask-restx repository`_
2. Ensure an issue is opened for your feature or bug
3. create a branch with an explicit name (like ``my-new-feature`` or ``issue-XX``)
4. do your work in it
5. Commit your changes. Ensure the commit message includes the issue. Also, if contributing from a corporation, be sure to add a comment with the Copyright information
6. rebase it on the master branch from the official repository (cleanup your history by performing an interactive rebase)
7. add your change to the changelog
8. submit your pull-request
9. Two maintainers should review the code for bugfixes and features; one maintainer suffices for minor changes (such as docs)
10. After review, a maintainer will merge the PR. Maintainers should not merge their own PRs
There are some rules to follow:
- your contribution should be documented (if needed)
- your contribution should be tested and the test suite should pass successfully
- your code should be properly formatted (use ``black .`` to format)
- your contribution should work on all supported Python versions (use ``tox`` to test)
You need to install some dependencies to develop on flask-restx:
.. code-block:: console
$ pip install -e .[dev]
An `Invoke <https://www.pyinvoke.org/>`_ ``tasks.py`` is provided to simplify the common tasks:
.. code-block:: console
    $ inv -l
    Available tasks:

      all      Run tests, reports and packaging
      assets   Fetch web assets -- Swagger. Requires NPM (see below)
      clean    Cleanup all build artifacts
      cover    Run tests suite with coverage
      demo     Run the demo
      dist     Package for distribution
      doc      Build the documentation
      qa       Run a quality report
      test     Run tests suite
      tox      Run tests against Python versions
To ensure everything is fine before submission, use ``tox``.
It will run the test suite on all the supported Python versions
and ensure the documentation generates correctly.
.. code-block:: console
$ tox
You also need to ensure your code is compliant with the flask-restx coding standards:
.. code-block:: console
$ inv qa
To ensure everything is fine before committing, you can launch the all in one command:
.. code-block:: console
$ inv qa tox
It will ensure the code meets the coding conventions, runs on every supported
Python version, and that the documentation generates properly.
.. _official flask-restx repository: https://github.com/python-restx/flask-restx
.. _official bugtracker: https://github.com/python-restx/flask-restx/issues
Running a local Swagger Server
------------------------------
For local development, you may wish to run a local Swagger server. Running the following will install one:
.. code-block:: console
$ inv assets
NOTE: You'll need `NPM <https://docs.npmjs.com/getting-started/>`_ installed to do this.
If you're new to NPM, also check out `nvm <https://github.com/creationix/nvm/blob/master/README.md>`_
Release process
---------------
The new releases are pushed on `Pypi.org <https://pypi.org/>`_ automatically
from `GitHub Actions <https://github.com/python-restx/flask-restx/actions?query=workflow%3ARelease>`_ when we add a new tag (unless the
tests are failing).
In order to prepare a new release, you can use `bumpr <https://github.com/noirbizarre/bumpr>`_
which automates a few things.
You first need to install it, then run the ``bumpr`` command. You can then refer
to the `documentation <https://bumpr.readthedocs.io/en/latest/commandline.html>`_
for further details.
For instance, you would run ``bumpr -m`` (replace ``-m`` with ``-p`` or ``-M``
depending on the expected version bump).
Updating Copyright
------------------
If you're a part of a corporation with an NDA, you may be required to update the
LICENSE file. This should be discussed and agreed upon by the project maintainers.
1. Check with your legal department first.
2. Add an appropriate line to the LICENSE file.
3. When making a commit, add the specific copyright notice.
Double check with your legal department about their regulations. Not all changes
constitute new or unique work.

View File

@ -0,0 +1,32 @@
BSD 3-Clause License
Original work Copyright (c) 2013 Twilio, Inc
Modified work Copyright (c) 2014 Axel Haustant
Modified work Copyright (c) 2020 python-restx Authors
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@ -0,0 +1,5 @@
include README.rst MANIFEST.in LICENSE
recursive-include flask_restx *
recursive-include requirements *.pip
global-exclude *.pyc

View File

@ -0,0 +1,216 @@
===========
Flask RESTX
===========
.. image:: https://github.com/python-restx/flask-restx/workflows/Tests/badge.svg?tag=1.3.0&event=push
:target: https://github.com/python-restx/flask-restx/actions?query=workflow%3ATests
:alt: Tests status
.. image:: https://codecov.io/gh/python-restx/flask-restx/branch/master/graph/badge.svg
:target: https://codecov.io/gh/python-restx/flask-restx
:alt: Code coverage
.. image:: https://readthedocs.org/projects/flask-restx/badge/?version=1.3.0
:target: https://flask-restx.readthedocs.io/en/1.3.0/
:alt: Documentation status
.. image:: https://img.shields.io/pypi/l/flask-restx.svg
:target: https://pypi.org/project/flask-restx
:alt: License
.. image:: https://img.shields.io/pypi/pyversions/flask-restx.svg
:target: https://pypi.org/project/flask-restx
:alt: Supported Python versions
.. image:: https://badges.gitter.im/Join%20Chat.svg
:target: https://gitter.im/python-restx?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge
:alt: Join the chat at https://gitter.im/python-restx
.. image:: https://img.shields.io/badge/code%20style-black-000000.svg
:target: https://github.com/psf/black
:alt: Code style: black
Flask-RESTX is a community driven fork of `Flask-RESTPlus <https://github.com/noirbizarre/flask-restplus>`_.
Flask-RESTX is an extension for `Flask`_ that adds support for quickly building REST APIs.
Flask-RESTX encourages best practices with minimal setup.
If you are familiar with Flask, Flask-RESTX should be easy to pick up.
It provides a coherent collection of decorators and tools to describe your API
and expose its documentation properly using `Swagger`_.
Compatibility
=============
Flask-RESTX requires Python 3.8+.
On Flask Compatibility
======================
Flask and Werkzeug moved to versions 2.0 in March 2020. This caused a breaking change in Flask-RESTX.
.. list-table:: RESTX and Flask / Werkzeug Compatibility
:widths: 25 25 25
:header-rows: 1
* - Flask-RESTX version
- Flask version
- Note
* - <= 0.3.0
- < 2.0.0
- unpinned in Flask-RESTX. Pin your projects!
* - == 0.4.0
- < 2.0.0
- pinned in Flask-RESTX.
* - >= 0.5.0
- < 3.0.0
- unpinned, import statements wrapped for compatibility
* - == 1.2.0
- < 3.0.0
- pinned in Flask-RESTX.
* - >= 1.3.0
- >= 2.0.0 (Flask >= 3.0.0 support)
- unpinned, import statements wrapped for compatibility
* - trunk branch in Github
- >= 2.0.0 (Flask >= 3.0.0 support)
- unpinned, will address issues faster than releases.
Installation
============
You can install Flask-RESTX with pip:
.. code-block:: console
$ pip install flask-restx
or with easy_install:
.. code-block:: console
$ easy_install flask-restx
Quick start
===========

With Flask-RESTX, you only import the api instance to route and document your endpoints.

.. code-block:: python

    from flask import Flask
    from flask_restx import Api, Resource, fields

    app = Flask(__name__)
    api = Api(app, version='1.0', title='TodoMVC API',
        description='A simple TodoMVC API',
    )

    ns = api.namespace('todos', description='TODO operations')

    todo = api.model('Todo', {
        'id': fields.Integer(readonly=True, description='The task unique identifier'),
        'task': fields.String(required=True, description='The task details')
    })

    class TodoDAO(object):
        def __init__(self):
            self.counter = 0
            self.todos = []

        def get(self, id):
            for todo in self.todos:
                if todo['id'] == id:
                    return todo
            api.abort(404, "Todo {} doesn't exist".format(id))

        def create(self, data):
            todo = data
            todo['id'] = self.counter = self.counter + 1
            self.todos.append(todo)
            return todo

        def update(self, id, data):
            todo = self.get(id)
            todo.update(data)
            return todo

        def delete(self, id):
            todo = self.get(id)
            self.todos.remove(todo)

    DAO = TodoDAO()
    DAO.create({'task': 'Build an API'})
    DAO.create({'task': '?????'})
    DAO.create({'task': 'profit!'})

    @ns.route('/')
    class TodoList(Resource):
        '''Shows a list of all todos, and lets you POST to add new tasks'''
        @ns.doc('list_todos')
        @ns.marshal_list_with(todo)
        def get(self):
            '''List all tasks'''
            return DAO.todos

        @ns.doc('create_todo')
        @ns.expect(todo)
        @ns.marshal_with(todo, code=201)
        def post(self):
            '''Create a new task'''
            return DAO.create(api.payload), 201

    @ns.route('/<int:id>')
    @ns.response(404, 'Todo not found')
    @ns.param('id', 'The task identifier')
    class Todo(Resource):
        '''Show a single todo item and lets you delete them'''
        @ns.doc('get_todo')
        @ns.marshal_with(todo)
        def get(self, id):
            '''Fetch a given resource'''
            return DAO.get(id)

        @ns.doc('delete_todo')
        @ns.response(204, 'Todo deleted')
        def delete(self, id):
            '''Delete a task given its identifier'''
            DAO.delete(id)
            return '', 204

        @ns.expect(todo)
        @ns.marshal_with(todo)
        def put(self, id):
            '''Update a task given its identifier'''
            return DAO.update(id, api.payload)

    if __name__ == '__main__':
        app.run(debug=True)
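The in-memory DAO in the quick start is plain Python apart from the ``api.abort`` call. The sketch below is a standalone version for experimenting with its id-assignment behavior; swapping ``api.abort(404, ...)`` for a ``KeyError`` is my substitution, not part of the original example.

```python
# Standalone sketch of the quick start's TodoDAO. The api.abort(404, ...)
# call is replaced by raising KeyError so it runs without Flask-RESTX.
class TodoDAO:
    def __init__(self):
        self.counter = 0
        self.todos = []

    def get(self, id):
        for todo in self.todos:
            if todo['id'] == id:
                return todo
        raise KeyError("Todo {} doesn't exist".format(id))

    def create(self, data):
        # ids come from a monotonically increasing counter, starting at 1
        data['id'] = self.counter = self.counter + 1
        self.todos.append(data)
        return data

    def update(self, id, data):
        todo = self.get(id)
        todo.update(data)
        return todo

    def delete(self, id):
        self.todos.remove(self.get(id))


dao = TodoDAO()
dao.create({'task': 'Build an API'})
dao.create({'task': 'profit!'})
print(dao.get(2)['task'])  # → profit!
```

Note that ``create`` mutates and returns the caller's dict; the real endpoint relies on this when it returns ``DAO.create(api.payload), 201``.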
Contributors
============

Flask-RESTX is brought to you by @python-restx. Since early 2019, @SteadBytes,
@a-luna, @j5awry and @ziirish volunteered to help @python-restx keep the project up
and running, and they did so for a long time! Since the beginning of 2023, the project
has been maintained by @peter-doggart with help from @ziirish.

Of course everyone is welcome to contribute, and we will be happy to review your
PRs or answer your issues.
Documentation
=============

The documentation is hosted `on Read the Docs <http://flask-restx.readthedocs.io/en/latest/>`_.

.. _Flask: https://flask.palletsprojects.com/
.. _Swagger: https://swagger.io/
Contribution
============

Want to contribute? That's awesome! Check out `CONTRIBUTING.rst <https://github.com/python-restx/flask-restx/blob/master/CONTRIBUTING.rst>`_!


@ -0,0 +1,25 @@
[bumpr]
file = flask_restx/__about__.py
vcs = git
commit = true
tag = true
push = true
tests = tox -e py38
clean =
    inv clean
files =
    README.rst

[bump]
unsuffix = true

[prepare]
part = patch
suffix = dev

[readthedoc]
id = flask-restx

[replace]
dev = ?branch=master
stable = ?tag={version}


@ -0,0 +1,25 @@
[run]
source = flask_restx
branch = True
omit =
    /tests/*

[report]
# Regexes for lines to exclude from consideration
exclude_lines =
    # Have to re-enable the standard pragma
    pragma: no cover

    # Don't complain about missing debug-only code:
    def __repr__
    if self\.debug

    # Don't complain if tests don't hit defensive assertion code:
    raise AssertionError
    raise NotImplementedError

    # Don't complain if non-runnable code isn't run:
    if 0:
    if __name__ == .__main__.:

ignore_errors = True
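The ``exclude_lines`` entries are regular expressions, which is why the last one writes ``.`` where a quote character would go: the wildcard matches both single- and double-quoted ``__main__`` guards. A quick stdlib check (illustrative only, not part of the config):

```python
import re

# The pattern from .coveragerc; each '.' matches any single character,
# so both quoting styles of the __main__ guard are excluded from coverage.
pattern = re.compile(r"if __name__ == .__main__.:")

print(bool(pattern.search("if __name__ == '__main__':")))  # → True
print(bool(pattern.search('if __name__ == "__main__":')))  # → True
```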


@ -0,0 +1,7 @@
opengnsys-flask-restx (1.3.0) UNRELEASED; urgency=medium

  * Initial version

 -- Vadim Troshchinskiy <vtroshchinskiy@qindel.com>  Tue, 23 Dec 2024 10:47:04 +0000


@ -0,0 +1,34 @@
Source: opengnsys-flask-restx
Maintainer: OpenGnsys <opengnsys@opengnsys.org>
Section: python
Priority: optional
Build-Depends: debhelper-compat (= 12),
               dh-python,
               libarchive-dev,
               python3-all,
               python3-mock,
               python3-pytest,
               python3-setuptools,
               python3-aniso8601,
               faker,
               python3-importlib-resources,
               python3-pytest-flask,
               python3-pytest-mock,
               python3-pytest-benchmark
Standards-Version: 4.5.0
Rules-Requires-Root: no
Homepage: https://github.com/python-restx/flask-restx
Vcs-Browser: https://github.com/python-restx/flask-restx
Vcs-Git: https://github.com/python-restx/flask-restx.git

Package: opengnsys-flask-restx
Architecture: all
Depends: ${lib:Depends}, ${misc:Depends}, ${python3:Depends}
Description: Flask-RESTX is a community-driven fork of Flask-RESTPlus.
 Flask-RESTX is an extension for Flask that adds support for quickly building
 REST APIs. Flask-RESTX encourages best practices with minimal setup.
 .
 If you are familiar with Flask, Flask-RESTX should be easy to pick up.
 It provides a coherent collection of decorators and tools to describe your
 API and expose its documentation properly using Swagger.


@ -0,0 +1,208 @@
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: python-libarchive-c
Source: https://github.com/Changaco/python-libarchive-c
Files: *
Copyright: 2014-2018 Changaco <changaco@changaco.oy.lc>
License: CC-0
Files: tests/surrogateescape.py
Copyright: 2015 Changaco <changaco@changaco.oy.lc>
2011-2013 Victor Stinner <victor.stinner@gmail.com>
License: BSD-2-clause or PSF-2
Files: debian/*
Copyright: 2015 Jérémy Bobbio <lunar@debian.org>
2019 Mattia Rizzolo <mattia@debian.org>
License: permissive
Copying and distribution of this package, with or without
modification, are permitted in any medium without royalty
provided the copyright notice and this notice are
preserved.
License: BSD-2-clause
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.
License: PSF-2
1. This LICENSE AGREEMENT is between the Python Software Foundation ("PSF"),
and the Individual or Organization ("Licensee") accessing and otherwise using
this software ("Python") in source or binary form and its associated
documentation.
.
2. Subject to the terms and conditions of this License Agreement, PSF hereby
grants Licensee a nonexclusive, royalty-free, world-wide license to
reproduce, analyze, test, perform and/or display publicly, prepare derivative
works, distribute, and otherwise use Python alone or in any derivative
version, provided, however, that PSF's License Agreement and PSF's notice of
copyright, i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006 Python
Software Foundation; All Rights Reserved" are retained in Python alone or in
any derivative version prepared by Licensee.
.
3. In the event Licensee prepares a derivative work that is based on or
incorporates Python or any part thereof, and wants to make the derivative
work available to others as provided herein, then Licensee hereby agrees to
include in any such work a brief summary of the changes made to Python.
.
4. PSF is making Python available to Licensee on an "AS IS" basis. PSF MAKES
NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT
NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF
MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF
PYTHON WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.
.
5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON FOR ANY
INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF
MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, OR ANY DERIVATIVE
THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
.
6. This License Agreement will automatically terminate upon a material breach
of its terms and conditions.
.
7. Nothing in this License Agreement shall be deemed to create any
relationship of agency, partnership, or joint venture between PSF and
Licensee. This License Agreement does not grant permission to use PSF
trademarks or trade name in a trademark sense to endorse or promote products
or services of Licensee, or any third party.
.
8. By copying, installing or otherwise using Python, Licensee agrees to be
bound by the terms and conditions of this License Agreement.
License: CC-0
Statement of Purpose
.
The laws of most jurisdictions throughout the world automatically
confer exclusive Copyright and Related Rights (defined below) upon
the creator and subsequent owner(s) (each and all, an "owner") of an
original work of authorship and/or a database (each, a "Work").
.
Certain owners wish to permanently relinquish those rights to a Work
for the purpose of contributing to a commons of creative, cultural
and scientific works ("Commons") that the public can reliably and
without fear of later claims of infringement build upon, modify,
incorporate in other works, reuse and redistribute as freely as
possible in any form whatsoever and for any purposes, including
without limitation commercial purposes. These owners may contribute
to the Commons to promote the ideal of a free culture and the further
production of creative, cultural and scientific works, or to gain
reputation or greater distribution for their Work in part through the
use and efforts of others.
.
For these and/or other purposes and motivations, and without any
expectation of additional consideration or compensation, the person
associating CC0 with a Work (the "Affirmer"), to the extent that he
or she is an owner of Copyright and Related Rights in the Work,
voluntarily elects to apply CC0 to the Work and publicly distribute
the Work under its terms, with knowledge of his or her Copyright and
Related Rights in the Work and the meaning and intended legal effect
of CC0 on those rights.
.
1. Copyright and Related Rights. A Work made available under CC0 may
be protected by copyright and related or neighboring rights
("Copyright and Related Rights"). Copyright and Related Rights
include, but are not limited to, the following:
.
i. the right to reproduce, adapt, distribute, perform, display,
communicate, and translate a Work;
ii. moral rights retained by the original author(s) and/or
performer(s);
iii. publicity and privacy rights pertaining to a person's image
or likeness depicted in a Work;
iv. rights protecting against unfair competition in regards to a
Work, subject to the limitations in paragraph 4(a), below;
v. rights protecting the extraction, dissemination, use and
reuse of data in a Work;
vi. database rights (such as those arising under Directive
96/9/EC of the European Parliament and of the Council of 11
March 1996 on the legal protection of databases, and under
any national implementation thereof, including any amended or
successor version of such directive); and
vii. other similar, equivalent or corresponding rights throughout
the world based on applicable law or treaty, and any national
implementations thereof.
.
2. Waiver. To the greatest extent permitted by, but not in
contravention of, applicable law, Affirmer hereby overtly, fully,
permanently, irrevocably and unconditionally waives, abandons, and
surrenders all of Affirmer's Copyright and Related Rights and
associated claims and causes of action, whether now known or
unknown (including existing as well as future claims and causes of
action), in the Work (i) in all territories worldwide, (ii) for
the maximum duration provided by applicable law or treaty
(including future time extensions), (iii) in any current or future
medium and for any number of copies, and (iv) for any purpose
whatsoever, including without limitation commercial, advertising
or promotional purposes (the "Waiver"). Affirmer makes the Waiver
for the benefit of each member of the public at large and to the
detriment of Affirmer's heirs and successors, fully intending that
such Waiver shall not be subject to revocation, rescission,
cancellation, termination, or any other legal or equitable action
to disrupt the quiet enjoyment of the Work by the public as
contemplated by Affirmer's express Statement of Purpose.
.
3. Public License Fallback. Should any part of the Waiver for any
reason be judged legally invalid or ineffective under applicable law,
then the Waiver shall be preserved to the maximum extent permitted
taking into account Affirmer's express Statement of Purpose. In
addition, to the extent the Waiver is so judged Affirmer hereby
grants to each affected person a royalty-free, non transferable, non
sublicensable, non exclusive, irrevocable and unconditional license
to exercise Affirmer's Copyright and Related Rights in the Work (i)
in all territories worldwide, (ii) for the maximum duration provided
by applicable law or treaty (including future time extensions), (iii)
in any current or future medium and for any number of copies, and
(iv) for any purpose whatsoever, including without limitation
commercial, advertising or promotional purposes (the "License"). The
License shall be deemed effective as of the date CC0 was applied by
Affirmer to the Work. Should any part of the License for any reason
be judged legally invalid or ineffective under applicable law, such
partial invalidity or ineffectiveness shall not invalidate the
remainder of the License, and in such case Affirmer hereby affirms
that he or she will not (i) exercise any of his or her remaining
Copyright and Related Rights in the Work or (ii) assert any
associated claims and causes of action with respect to the Work, in
either case contrary to Affirmer's express Statement of Purpose.
.
4. Limitations and Disclaimers.
.
a. No trademark or patent rights held by Affirmer are waived,
abandoned, surrendered, licensed or otherwise affected by
this document.
b. Affirmer offers the Work as-is and makes no representations
or warranties of any kind concerning the Work, express,
implied, statutory or otherwise, including without limitation
warranties of title, merchantability, fitness for a
particular purpose, non infringement, or the absence of
latent or other defects, accuracy, or the present or absence
of errors, whether or not discoverable, all to the greatest
extent permissible under applicable law.
c. Affirmer disclaims responsibility for clearing rights of
other persons that may apply to the Work or any use thereof,
including without limitation any person's Copyright and
Related Rights in the Work. Further, Affirmer disclaims
responsibility for obtaining any necessary consents,
permissions or other rights required for any use of the
Work.
d. Affirmer understands and acknowledges that Creative Commons
is not a party to this document and has no duty or obligation
with respect to this CC0 or use of the Work.


@ -0,0 +1,25 @@
#!/usr/bin/make -f

export LC_ALL=C.UTF-8
export PYBUILD_NAME = flask-restx
#export PYBUILD_BEFORE_TEST = cp -av README.rst {build_dir}
export PYBUILD_TEST_ARGS = -vv -s
#export PYBUILD_AFTER_TEST = rm -v {build_dir}/README.rst

# ./usr/lib/python3/dist-packages/libarchive/
export PYBUILD_INSTALL_ARGS=--install-lib=/usr/share/opengnsys-modules/python3/dist-packages/

%:
	dh $@ --with python3 --buildsystem=pybuild

override_dh_gencontrol:
	dh_gencontrol -- \
		-Vlib:Depends=$(shell dpkg-query -W -f '$${Depends}' libarchive-dev \
		| sed -E 's/.*(libarchive[[:alnum:].-]+).*/\1/')

override_dh_installdocs:
# Nothing, we don't want docs

override_dh_installchangelogs:
# Nothing, we don't want the changelog

override_dh_auto_test:
# One test is broken, just disable for now

@ -0,0 +1 @@
3.0 (quilt)


@ -0,0 +1,2 @@
Tests: upstream-tests
Depends: @, python3-mock, python3-pytest


@ -0,0 +1,14 @@
#!/bin/sh
set -e

if ! [ -d "$AUTOPKGTEST_TMP" ]; then
    echo "AUTOPKGTEST_TMP not set." >&2
    exit 1
fi

cp -rv tests "$AUTOPKGTEST_TMP"
cd "$AUTOPKGTEST_TMP"
mkdir -v libarchive
touch README.rst

py.test-3 tests -vv -l -r a


@ -0,0 +1,177 @@
# Makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
PAPER         =
BUILDDIR      = _build

# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from https://sphinx-doc.org/)
endif

# Internal variables.
PAPEROPT_a4     = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .

.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext

help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html       to make standalone HTML files"
	@echo "  dirhtml    to make HTML files named index.html in directories"
	@echo "  singlehtml to make a single large HTML file"
	@echo "  pickle     to make pickle files"
	@echo "  json       to make JSON files"
	@echo "  htmlhelp   to make HTML files and a HTML help project"
	@echo "  qthelp     to make HTML files and a qthelp project"
	@echo "  devhelp    to make HTML files and a Devhelp project"
	@echo "  epub       to make an epub"
	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
	@echo "  latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
	@echo "  text       to make text files"
	@echo "  man        to make manual pages"
	@echo "  texinfo    to make Texinfo files"
	@echo "  info       to make Texinfo files and run them through makeinfo"
	@echo "  gettext    to make PO message catalogs"
	@echo "  changes    to make an overview of all changed/added/deprecated items"
	@echo "  xml        to make Docutils-native XML files"
	@echo "  pseudoxml  to make pseudoxml-XML files for display purposes"
	@echo "  linkcheck  to check all external links for integrity"
	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"

clean:
	rm -rf $(BUILDDIR)/*

html:
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

dirhtml:
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

singlehtml:
	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
	@echo
	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."

pickle:
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
	@echo
	@echo "Build finished; now you can process the pickle files."

json:
	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
	@echo
	@echo "Build finished; now you can process the JSON files."

htmlhelp:
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in $(BUILDDIR)/htmlhelp."

qthelp:
	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
	@echo
	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/Flask-RESTX.qhcp"
	@echo "To view the help file:"
	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/Flask-RESTX.qhc"

devhelp:
	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
	@echo
	@echo "Build finished."
	@echo "To view the help file:"
	@echo "# mkdir -p $$HOME/.local/share/devhelp/Flask-RESTX"
	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/Flask-RESTX"
	@echo "# devhelp"

epub:
	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
	@echo
	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."

latex:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo
	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
	@echo "Run \`make' in that directory to run these through (pdf)latex" \
	      "(use \`make latexpdf' here to do that automatically)."

latexpdf:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through pdflatex..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

latexpdfja:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through platex and dvipdfmx..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

text:
	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
	@echo
	@echo "Build finished. The text files are in $(BUILDDIR)/text."

man:
	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
	@echo
	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."

texinfo:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo
	@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
	@echo "Run \`make' in that directory to run these through makeinfo" \
	      "(use \`make info' here to do that automatically)."

info:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo "Running Texinfo files through makeinfo..."
	make -C $(BUILDDIR)/texinfo info
	@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."

gettext:
	$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
	@echo
	@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."

changes:
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
	@echo
	@echo "The overview file is in $(BUILDDIR)/changes."

linkcheck:
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	      "or in $(BUILDDIR)/linkcheck/output.txt."

doctest:
	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
	@echo "Testing of doctests in the sources finished, look at the " \
	      "results in $(BUILDDIR)/doctest/output.txt."

xml:
	$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
	@echo
	@echo "Build finished. The XML files are in $(BUILDDIR)/xml."

pseudoxml:
	$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
	@echo
	@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."

(Nine binary image files added, not shown; sizes range from 3.1 KiB to 128 KiB.)


@ -0,0 +1,7 @@
<!--h3>Links</h3-->
{% if theme_badges %}
<hr class="badges" />
{% for badge, target, alt in theme_badges %}
<p class="badge"><a href="{{target}}"><img src="{{badge}}" alt="{{alt}}" /></a></p>
{% endfor %}
{% endif %}


@ -0,0 +1,10 @@
{% extends "alabaster/layout.html" %}
{%- block extrahead %}
{% if theme_favicons %}
{% for size, file in theme_favicons.items() %}
<link rel="icon" type="image/png" href="{{ pathto('_static/' ~ file, 1) }}" sizes="{{size}}x{{size}}">
{% endfor %}
{% endif %}
{{ super() }}
{% endblock %}


@ -0,0 +1,12 @@
@import url("alabaster.css");
.sphinxsidebar p.badge a {
border: none;
}
.sphinxsidebar hr.badges {
border: 0;
border-bottom: 1px dashed #aaa;
background: none;
/*width: 100%;*/
}


@ -0,0 +1,7 @@
[theme]
inherit = alabaster
stylesheet = restx.css
[options]
favicons=
badges=


@ -0,0 +1,98 @@
.. _api:

API
===

.. currentmodule:: flask_restx

Core
----

.. autoclass:: Api
   :members:
   :inherited-members:

.. autoclass:: Namespace
   :members:

.. autoclass:: Resource
   :members:
   :inherited-members:

Models
------

.. autoclass:: flask_restx.Model
   :members:

All fields accept a ``required`` boolean and a ``description`` string in ``kwargs``.

.. automodule:: flask_restx.fields
   :members:

Serialization
-------------

.. currentmodule:: flask_restx

.. autofunction:: marshal

.. autofunction:: marshal_with

.. autofunction:: marshal_with_field

.. autoclass:: flask_restx.mask.Mask
   :members:

.. autofunction:: flask_restx.mask.apply

Request parsing
---------------

.. automodule:: flask_restx.reqparse
   :members:

Inputs
~~~~~~

.. automodule:: flask_restx.inputs
   :members:

Errors
------

.. automodule:: flask_restx.errors
   :members:

.. autoexception:: flask_restx.fields.MarshallingError

.. autoexception:: flask_restx.mask.MaskError

.. autoexception:: flask_restx.mask.ParseError

Schemas
-------

.. automodule:: flask_restx.schemas
   :members:

Internals
---------

These are internal classes or helpers.
Most of the time you shouldn't have to deal directly with them.

.. autoclass:: flask_restx.api.SwaggerView

.. autoclass:: flask_restx.swagger.Swagger

.. autoclass:: flask_restx.postman.PostmanCollectionV1

.. automodule:: flask_restx.utils
   :members:


@ -0,0 +1,342 @@
# -*- coding: utf-8 -*-
#
# Flask-RESTX documentation build configuration file, created by
# sphinx-quickstart on Wed Aug 13 17:07:14 2014.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import os
import sys
import alabaster
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath(".."))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
"sphinx.ext.autodoc",
"sphinx.ext.viewcode",
"sphinx.ext.intersphinx",
"sphinx.ext.todo",
"sphinx_issues",
"alabaster",
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]
# The suffix of source filenames.
source_suffix = ".rst"
# The encoding of source files.
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = "index"
# General information about the project.
project = "Flask-RESTX"
copyright = "2020, python-restx Authors"
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The full version, including alpha/beta/rc tags.
release = __import__("flask_restx").__version__
# The short version (major component only; use release.split(".")[:2] for X.Y).
version = ".".join(release.split(".")[:1])
# Github repo
issues_github_path = "python-restx/flask-restx"
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ["_build"]
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = "sphinx"
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = "restx"
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {
    "logo": "logo-512.png",
    "logo_name": True,
    "touch_icon": "apple-180.png",
    "github_user": "python-restx",
    "github_repo": "flask-restx",
    "github_banner": True,
    "show_related": True,
    "page_width": "1000px",
    "sidebar_width": "260px",
    "favicons": {
        64: "favicon-64.png",
        128: "favicon-128.png",
        196: "favicon-196.png",
    },
    "badges": [
        (
            # Gitter.im
            "https://badges.gitter.im/Join%20Chat.svg",
            "https://gitter.im/python-restx",
            "Join the chat at https://gitter.im/python-restx",
        ),
        (
            # Github Fork
            "https://img.shields.io/github/forks/python-restx/flask-restx.svg?style=social&label=Fork",
            "https://github.com/python-restx/flask-restx",
            "Github repository",
        ),
        (
            # Github issues
            "https://img.shields.io/github/issues-raw/python-restx/flask-restx.svg",
            "https://github.com/python-restx/flask-restx/issues",
            "Github repository",
        ),
        (
            # License
            "https://img.shields.io/github/license/python-restx/flask-restx.svg",
            "https://github.com/python-restx/flask-restx",
            "License",
        ),
        (
            # PyPI
            "https://img.shields.io/pypi/v/flask-restx.svg",
            "https://pypi.python.org/pypi/flask-restx",
            "Latest version on PyPI",
        ),
    ],
}
# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = [alabaster.get_path(), "_themes"]
html_context = {}
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
html_favicon = "_static/favicon.ico"
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["_static"]
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
html_sidebars = {
    "**": [
        "about.html",
        "navigation.html",
        "relations.html",
        "searchbox.html",
        "donate.html",
        "badges.html",
    ]
}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = "Flask-RESTXdoc"
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    # 'papersize': 'letterpaper',
    # The font size ('10pt', '11pt' or '12pt').
    # 'pointsize': '10pt',
    # Additional stuff for the LaTeX preamble.
    # 'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
    (
        "index",
        "Flask-RESTX.tex",
        "Flask-RESTX Documentation",
        "python-restx Authors",
        "manual",
    ),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    ("index", "flask-restx", "Flask-RESTX Documentation", ["python-restx Authors"], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
    (
        "index",
        "Flask-RESTX",
        "Flask-RESTX Documentation",
        "python-restx Authors",
        "Flask-RESTX",
        "One line description of project.",
        "Miscellaneous",
    ),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
intersphinx_mapping = {
    "flask": ("https://flask.palletsprojects.com/", None),
    "python": ("https://docs.python.org/", None),
    "werkzeug": ("https://werkzeug.palletsprojects.com/", None),
}
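# Note (illustrative example, not part of the original file): with the mapping
# above, reST documents can cross-reference objects in the external inventories,
# e.g. :class:`flask.Flask` or :func:`python:len`. The second element of each
# tuple may also name a local inventory file instead of None, e.g. (assuming
# such a file has been downloaded next to conf.py):
#
# intersphinx_mapping = {
#     "flask": ("https://flask.palletsprojects.com/", "flask-objects.inv"),
# }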
