Compare commits

...

47 Commits

Author SHA1 Message Date
Giò Diani 99a112df24 longer timeouts 2025-01-11 21:54:26 +01:00
Giò Diani 2013d2b440 Reinstated the heatmap. @stoffelmauro had to adjust something in the API for this. 2025-01-11 20:52:02 +01:00
mmaurostoffel 67382003ca closes #7: created etl_region_capacities
!! As described in the issue, etl_region_capacities was renamed to etl_region_properties_capacities, and the endpoints were adjusted accordingly. !!

!! The query for the global data is implemented and works, but takes quite a long time. !!
2025-01-11 17:33:50 +01:00
Giò Diani 774e30c945 fix daily chart 2025-01-09 18:49:09 +01:00
Giò Diani 3b935a4d20 Neighbours in popup 2025-01-09 18:34:20 +01:00
mmaurostoffel 638d835d3b closes #5
Test to try out the closing feature
2025-01-09 18:26:01 +01:00
mmaurostoffel cb6935b60c Updated the output of etl_property_neighbour.py
closes issue #5
2025-01-09 18:22:55 +01:00
mmaurostoffel 60a3d7d9b3 Little fixes for weekdays
#6
2025-01-09 17:40:58 +01:00
mmaurostoffel 65b63d1326 Added sorting for etl_property_capacities_monthly 2025-01-09 15:59:16 +01:00
mmaurostoffel a6cbe3bc29 Updated etl_capacities_weekdays.py 2025-01-09 15:07:38 +01:00
mmaurostoffel 2508b34ceb bugfix: wrong name 2025-01-09 14:49:03 +01:00
mmaurostoffel cc71cbba2d Added endpoint for etl_property_neighbours 2025-01-09 14:48:33 +01:00
Giò Diani 258f1e4df6 Display of monthly occupancy for properties in the dashboard. 2025-01-08 22:02:33 +01:00
mmaurostoffel 7884febe53 issue 5 resolved
#3
Output format:
{ids: [84, 43...44], lat: [...], lon: [...]}
2025-01-07 20:20:48 +01:00
mmaurostoffel 42dc14021f Added etl_Property_capacities_weekdays.py
Added a query option for weekdays
2025-01-06 19:42:49 +01:00
Giò Diani f5a2b16721 Added missing quotation marks, resolves #3 2025-01-05 21:45:23 +01:00
mmaurostoffel d9cae3d0ab Issue 3 almost done
#3

The issue is largely ready, but there is still the problem that the ScrapeDate is interpreted as an integer rather than as a date in database.py. For now it is therefore implemented as a constant.
2025-01-05 21:23:10 +01:00
mmaurostoffel 8bcc1c57b5 Gitea Issue 1, example 2
#1
Added etl_region_capacities_comparison
2025-01-05 17:25:29 +01:00
mmaurostoffel 03e78a4105 Issue 1, example 1 resolved
#1

Added global region capacities. Caution: long loading time!
2025-01-05 16:12:16 +01:00
mmaurostoffel 2a9ef9d991 Merge branch 'main' of https://gitea.fhgr.ch/stoffelmauro/ConsultancyProject_2_ETL 2025-01-05 15:51:27 +01:00
mmaurostoffel 8fcaf2a6f7 Gitea Issue 2 resolved
#2

etl_region_capacities.py: new output format = [datum, prop_id, capacity]
2025-01-05 15:51:19 +01:00
Giò Diani 8655255782 Info button, dashboard start page 2025-01-05 13:26:51 +01:00
mmaurostoffel 281d9d3f5a Merge branch 'main' of https://gitea.fhgr.ch/stoffelmauro/ConsultancyProject_2_ETL 2025-01-05 13:19:45 +01:00
mmaurostoffel c68e6f54bd cleanup commit 2025-01-05 13:19:43 +01:00
Giò Diani 32d162c7c5 Reworked the detail view. 2025-01-04 18:16:12 +01:00
Giò Diani 466d3168c4 Added caching (adjust setting in .env accordingly; CACHE_STORE=file) 2025-01-03 16:44:17 +01:00
Giò Diani 5a2cc96a95 Axis labels, colours 2025-01-03 16:25:30 +01:00
Giò Diani 640a5b2f9e Implement region capacity as test 2024-12-20 21:46:54 +01:00
Giò Diani f585a7a2aa fix missing import 2024-12-20 21:36:47 +01:00
mmaurostoffel 818d6fb5ec Merge branch 'main' of https://gitea.fhgr.ch/stoffelmauro/ConsultancyProject_2_ETL 2024-12-20 20:57:20 +01:00
mmaurostoffel a8b856b714 Created etl_region_capacities + corresponding adjustments to database and api/main 2024-12-20 20:57:10 +01:00
Giò Diani 0aa0f2345c documentation folder, c4 diagram 2024-12-20 17:38:08 +01:00
Giò Diani eb362d78ad Heatmap preparation 2024-12-20 15:25:33 +01:00
Giò Diani 5f61911a69 More sensible default in env, more documentation how to install and run. 2024-12-20 15:03:22 +01:00
Giò Diani 66d048c70e Enhance coordinated view of property. 2024-12-20 12:20:49 +01:00
Giò Diani 63590d69ab fix wrong import path. 2024-12-20 10:56:10 +01:00
Giò Diani 47a5035787 Dashboard single-property view: occupancy trend. 2024-12-19 19:22:09 +01:00
mmaurostoffel 4b7067fb63 Merge branch 'main' of https://gitea.fhgr.ch/stoffelmauro/ConsultancyProject_2_ETL 2024-12-19 18:11:17 +01:00
mmaurostoffel eba2f0a265 Data Quality updated to include Regions and more information 2024-12-19 18:11:15 +01:00
Giò Diani ce46655003 Add some more description. 2024-12-18 20:10:11 +01:00
Giò Diani 233f3c475a Added README 2024-12-18 19:55:42 +01:00
Giò Diani a8543d619f Further dashboard development. 2024-12-18 19:52:06 +01:00
Giò Diani 1574edea88 First steps Dashboard. 2024-12-18 15:14:13 +01:00
mmaurostoffel a03ce3d647 Carried over changes from before the monorepo migration 2024-12-18 15:11:23 +01:00
Giò Diani f4a927e125 refactor to monorepo, install laravel. 2024-12-18 10:14:56 +01:00
mmaurostoffel 125250a665 Created data_quality.py to visualise data quality 2024-12-11 01:01:52 +01:00
mmaurostoffel 338d3e9cc2 Completed the analysis of booking lead times 2024-11-28 16:10:53 +01:00
1442 changed files with 74449 additions and 236 deletions

2
.gitignore vendored
View File

@ -23,6 +23,7 @@
*.ipr
.idea/
# eclipse project file
.settings/
.classpath
@ -65,3 +66,4 @@ env3.*/
# duckdb
*.duckdb
/src/mauro/dok/

6
README.md Normal file
View File

@ -0,0 +1,6 @@
# Consultancy 2
## Project structure
- etl: Contains the program code that prepares the data and makes it available via a REST API.
- dashboard: Web application for exploring and visualising the data.

18
dashboard/.editorconfig Normal file
View File

@ -0,0 +1,18 @@
root = true
[*]
charset = utf-8
end_of_line = lf
indent_size = 4
indent_style = space
insert_final_newline = true
trim_trailing_whitespace = true
[*.md]
trim_trailing_whitespace = false
[*.{yml,yaml}]
indent_size = 2
[docker-compose.yml]
indent_size = 4

68
dashboard/.env.example Normal file
View File

@ -0,0 +1,68 @@
APP_NAME=Laravel
APP_ENV=local
APP_KEY=
APP_DEBUG=true
APP_TIMEZONE=UTC
APP_URL=http://localhost
APP_LOCALE=en
APP_FALLBACK_LOCALE=en
APP_FAKER_LOCALE=en_US
APP_MAINTENANCE_DRIVER=file
# APP_MAINTENANCE_STORE=database
PHP_CLI_SERVER_WORKERS=4
BCRYPT_ROUNDS=12
LOG_CHANNEL=stack
LOG_STACK=single
LOG_DEPRECATIONS_CHANNEL=null
LOG_LEVEL=debug
# DB_CONNECTION=sqlite
# DB_HOST=127.0.0.1
# DB_PORT=3306
# DB_DATABASE=laravel
# DB_USERNAME=root
# DB_PASSWORD=
SESSION_DRIVER=file
SESSION_LIFETIME=120
SESSION_ENCRYPT=false
SESSION_PATH=/
SESSION_DOMAIN=null
BROADCAST_CONNECTION=log
FILESYSTEM_DISK=local
QUEUE_CONNECTION=database
CACHE_STORE=file
CACHE_PREFIX=
MEMCACHED_HOST=127.0.0.1
REDIS_CLIENT=phpredis
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
MAIL_MAILER=log
MAIL_SCHEME=null
MAIL_HOST=127.0.0.1
MAIL_PORT=2525
MAIL_USERNAME=null
MAIL_PASSWORD=null
MAIL_FROM_ADDRESS="hello@example.com"
MAIL_FROM_NAME="${APP_NAME}"
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_DEFAULT_REGION=us-east-1
AWS_BUCKET=
AWS_USE_PATH_STYLE_ENDPOINT=false
VITE_APP_NAME="${APP_NAME}"
FASTAPI_URI=http://localhost:8080

11
dashboard/.gitattributes vendored Normal file
View File

@ -0,0 +1,11 @@
* text=auto eol=lf
*.blade.php diff=html
*.css diff=css
*.html diff=html
*.md diff=markdown
*.php diff=php
/.github export-ignore
CHANGELOG.md export-ignore
.styleci.yml export-ignore

23
dashboard/.gitignore vendored Normal file
View File

@ -0,0 +1,23 @@
/.phpunit.cache
/node_modules
/public/build
/public/hot
/public/storage
/storage/*.key
/storage/pail
/vendor
.env
.env.backup
.env.production
.phpactor.json
.phpunit.result.cache
Homestead.json
Homestead.yaml
auth.json
npm-debug.log
yarn-error.log
/.fleet
/.idea
/.nova
/.vscode
/.zed

16
dashboard/README.md Normal file
View File

@ -0,0 +1,16 @@
# Install
## Prerequisites
- To run this project, please install all required software according to the Laravel documentation: https://laravel.com/docs/11.x#installing-php
## Configuration & installation
- Copy .env.example to .env
- Run the following commands:
```bash
composer install && php artisan key:generate && npm i
```
# Run server
```bash
composer run dev
```

93
dashboard/app/Api.php Normal file
View File

@ -0,0 +1,93 @@
<?php
namespace App;
use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\Http;
class Api
{
public function __construct()
{
}
public static function get(string $path, string $query = ''): ?array
{
$endpoint = env('FASTAPI_URI');
$request = $endpoint.$path;
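// Serve the response from the cache if this exact request has been fetched before.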
if (Cache::has($request)) {
return Cache::get($request);
}
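// Otherwise query the FastAPI backend (long timeout, cf. the "longer timeouts" commit above).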
$get = Http::timeout(800)->get($request);
if($get->successful()){
$result = $get->json();
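// Cache the successful response; no TTL is passed, so it is kept until the cache is cleared.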
Cache::put($request, $result);
return $result;
}
return null;
}
public static function propertiesPerRegion()
{
return self::get('/region/properties');
}
public static function propertiesGrowth()
{
return self::get('/properties/growth');
}
public static function propertiesGeo()
{
return self::get('/properties/geo');
}
public static function propertyExtractions(int $id)
{
return self::get("/property/{$id}/extractions");
}
public static function propertyCapacities(int $id)
{
return self::get("/property/{$id}/capacities");
}
public static function propertyBase(int $id): mixed
{
return self::get("/property/{$id}/base");
}
public static function regionPropertyCapacities(int $id): mixed
{
return self::get("/region/{$id}/properties/capacities");
}
public static function propertyCapacitiesMonthly(int $id, string $date): mixed
{
return self::get("/property/{$id}/capacities/monthly/{$date}");
}
public static function propertyCapacitiesDaily(int $id, string $date): mixed
{
return self::get("/property/{$id}/capacities/weekdays/{$date}");
}
public static function propertyNeighbours(int $id): mixed
{
return self::get("/property/{$id}/neighbours");
}
public static function regionCapacities(int $id): mixed
{
return self::get("/region/{$id}/capacities");
}
}
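
For orientation, here is a minimal sketch of how this wrapper might be consumed from a route. The route path and the `property` view are hypothetical and not part of this changeset:

```php
<?php
// routes/web.php — hypothetical usage example, not part of this diff.
use App\Api;
use Illuminate\Support\Facades\Route;

Route::get('/property/{id}', function (int $id) {
    // Each helper issues one GET to the FastAPI backend and caches the JSON response.
    $base       = Api::propertyBase($id);
    $capacities = Api::propertyCapacities($id);
    $neighbours = Api::propertyNeighbours($id);

    // Api::get() returns null when the backend request fails.
    abort_if($base === null, 404);

    return view('property', compact('base', 'capacities', 'neighbours'));
});
```

Because Api::get() keys the cache on the full request URL, each endpoint is fetched from FastAPI only once and then served from the configured cache store (CACHE_STORE=file in .env.example).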

View File

@ -0,0 +1,8 @@
<?php
namespace App\Http\Controllers;
abstract class Controller
{
//
}

View File

@ -0,0 +1,48 @@
<?php
namespace App\Models;
// use Illuminate\Contracts\Auth\MustVerifyEmail;
use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Foundation\Auth\User as Authenticatable;
use Illuminate\Notifications\Notifiable;
class User extends Authenticatable
{
/** @use HasFactory<\Database\Factories\UserFactory> */
use HasFactory, Notifiable;
/**
* The attributes that are mass assignable.
*
* @var list<string>
*/
protected $fillable = [
'name',
'email',
'password',
];
/**
* The attributes that should be hidden for serialization.
*
* @var list<string>
*/
protected $hidden = [
'password',
'remember_token',
];
/**
* Get the attributes that should be cast.
*
* @return array<string, string>
*/
protected function casts(): array
{
return [
'email_verified_at' => 'datetime',
'password' => 'hashed',
];
}
}

View File

@ -0,0 +1,24 @@
<?php
namespace App\Providers;
use Illuminate\Support\ServiceProvider;
class AppServiceProvider extends ServiceProvider
{
/**
* Register any application services.
*/
public function register(): void
{
//
}
/**
* Bootstrap any application services.
*/
public function boot(): void
{
//
}
}

15
dashboard/artisan Executable file
View File

@ -0,0 +1,15 @@
#!/usr/bin/env php
<?php
use Symfony\Component\Console\Input\ArgvInput;
define('LARAVEL_START', microtime(true));
// Register the Composer autoloader...
require __DIR__.'/vendor/autoload.php';
// Bootstrap Laravel and handle the command...
$status = (require_once __DIR__.'/bootstrap/app.php')
->handleCommand(new ArgvInput);
exit($status);

View File

@ -0,0 +1,18 @@
<?php
use Illuminate\Foundation\Application;
use Illuminate\Foundation\Configuration\Exceptions;
use Illuminate\Foundation\Configuration\Middleware;
return Application::configure(basePath: dirname(__DIR__))
->withRouting(
web: __DIR__.'/../routes/web.php',
commands: __DIR__.'/../routes/console.php',
health: '/up',
)
->withMiddleware(function (Middleware $middleware) {
//
})
->withExceptions(function (Exceptions $exceptions) {
//
})->create();

2
dashboard/bootstrap/cache/.gitignore vendored Normal file
View File

@ -0,0 +1,2 @@
*
!.gitignore

View File

@ -0,0 +1,5 @@
<?php
return [
App\Providers\AppServiceProvider::class,
];

74
dashboard/composer.json Normal file
View File

@ -0,0 +1,74 @@
{
"$schema": "https://getcomposer.org/schema.json",
"name": "laravel/laravel",
"type": "project",
"description": "The skeleton application for the Laravel framework.",
"keywords": [
"laravel",
"framework"
],
"license": "MIT",
"require": {
"php": "^8.2",
"laravel/framework": "^11.31",
"laravel/tinker": "^2.9"
},
"require-dev": {
"fakerphp/faker": "^1.23",
"laravel/pail": "^1.1",
"laravel/pint": "^1.13",
"laravel/sail": "^1.26",
"mockery/mockery": "^1.6",
"nunomaduro/collision": "^8.1",
"phpunit/phpunit": "^11.0.1"
},
"autoload": {
"psr-4": {
"App\\": "app/",
"Database\\Factories\\": "database/factories/",
"Database\\Seeders\\": "database/seeders/"
}
},
"autoload-dev": {
"psr-4": {
"Tests\\": "tests/"
}
},
"scripts": {
"post-autoload-dump": [
"Illuminate\\Foundation\\ComposerScripts::postAutoloadDump",
"@php artisan package:discover --ansi"
],
"post-update-cmd": [
"@php artisan vendor:publish --tag=laravel-assets --ansi --force"
],
"post-root-package-install": [
"@php -r \"file_exists('.env') || copy('.env.example', '.env');\""
],
"post-create-project-cmd": [
"@php artisan key:generate --ansi",
"@php -r \"file_exists('database/database.sqlite') || touch('database/database.sqlite');\"",
"@php artisan migrate --graceful --ansi"
],
"dev": [
"Composer\\Config::disableProcessTimeout",
"npx concurrently -c \"#93c5fd,#c4b5fd,#fb7185,#fdba74\" \"php artisan serve\" \"php artisan queue:listen --tries=1\" \"php artisan pail --timeout=0\" \"npm run dev\" --names=server,queue,logs,vite"
]
},
"extra": {
"laravel": {
"dont-discover": []
}
},
"config": {
"optimize-autoloader": true,
"preferred-install": "dist",
"sort-packages": true,
"allow-plugins": {
"pestphp/pest-plugin": true,
"php-http/discovery": true
}
},
"minimum-stability": "stable",
"prefer-stable": true
}

8083
dashboard/composer.lock generated Normal file

File diff suppressed because it is too large

126
dashboard/config/app.php Normal file
View File

@ -0,0 +1,126 @@
<?php
return [
/*
|--------------------------------------------------------------------------
| Application Name
|--------------------------------------------------------------------------
|
| This value is the name of your application, which will be used when the
| framework needs to place the application's name in a notification or
| other UI elements where an application name needs to be displayed.
|
*/
'name' => env('APP_NAME', 'Laravel'),
/*
|--------------------------------------------------------------------------
| Application Environment
|--------------------------------------------------------------------------
|
| This value determines the "environment" your application is currently
| running in. This may determine how you prefer to configure various
| services the application utilizes. Set this in your ".env" file.
|
*/
'env' => env('APP_ENV', 'production'),
/*
|--------------------------------------------------------------------------
| Application Debug Mode
|--------------------------------------------------------------------------
|
| When your application is in debug mode, detailed error messages with
| stack traces will be shown on every error that occurs within your
| application. If disabled, a simple generic error page is shown.
|
*/
'debug' => (bool) env('APP_DEBUG', false),
/*
|--------------------------------------------------------------------------
| Application URL
|--------------------------------------------------------------------------
|
| This URL is used by the console to properly generate URLs when using
| the Artisan command line tool. You should set this to the root of
| the application so that it's available within Artisan commands.
|
*/
'url' => env('APP_URL', 'http://localhost'),
/*
|--------------------------------------------------------------------------
| Application Timezone
|--------------------------------------------------------------------------
|
| Here you may specify the default timezone for your application, which
| will be used by the PHP date and date-time functions. The timezone
| is set to "UTC" by default as it is suitable for most use cases.
|
*/
'timezone' => env('APP_TIMEZONE', 'UTC'),
/*
|--------------------------------------------------------------------------
| Application Locale Configuration
|--------------------------------------------------------------------------
|
| The application locale determines the default locale that will be used
| by Laravel's translation / localization methods. This option can be
| set to any locale for which you plan to have translation strings.
|
*/
'locale' => env('APP_LOCALE', 'en'),
'fallback_locale' => env('APP_FALLBACK_LOCALE', 'en'),
'faker_locale' => env('APP_FAKER_LOCALE', 'en_US'),
/*
|--------------------------------------------------------------------------
| Encryption Key
|--------------------------------------------------------------------------
|
| This key is utilized by Laravel's encryption services and should be set
| to a random, 32 character string to ensure that all encrypted values
| are secure. You should do this prior to deploying the application.
|
*/
'cipher' => 'AES-256-CBC',
'key' => env('APP_KEY'),
'previous_keys' => [
...array_filter(
explode(',', env('APP_PREVIOUS_KEYS', ''))
),
],
/*
|--------------------------------------------------------------------------
| Maintenance Mode Driver
|--------------------------------------------------------------------------
|
| These configuration options determine the driver used to determine and
| manage Laravel's "maintenance mode" status. The "cache" driver will
| allow maintenance mode to be controlled across multiple machines.
|
| Supported drivers: "file", "cache"
|
*/
'maintenance' => [
'driver' => env('APP_MAINTENANCE_DRIVER', 'file'),
'store' => env('APP_MAINTENANCE_STORE', 'database'),
],
];

115
dashboard/config/auth.php Normal file
View File

@ -0,0 +1,115 @@
<?php
return [
/*
|--------------------------------------------------------------------------
| Authentication Defaults
|--------------------------------------------------------------------------
|
| This option defines the default authentication "guard" and password
| reset "broker" for your application. You may change these values
| as required, but they're a perfect start for most applications.
|
*/
'defaults' => [
'guard' => env('AUTH_GUARD', 'web'),
'passwords' => env('AUTH_PASSWORD_BROKER', 'users'),
],
/*
|--------------------------------------------------------------------------
| Authentication Guards
|--------------------------------------------------------------------------
|
| Next, you may define every authentication guard for your application.
| Of course, a great default configuration has been defined for you
| which utilizes session storage plus the Eloquent user provider.
|
| All authentication guards have a user provider, which defines how the
| users are actually retrieved out of your database or other storage
| system used by the application. Typically, Eloquent is utilized.
|
| Supported: "session"
|
*/
'guards' => [
'web' => [
'driver' => 'session',
'provider' => 'users',
],
],
/*
|--------------------------------------------------------------------------
| User Providers
|--------------------------------------------------------------------------
|
| All authentication guards have a user provider, which defines how the
| users are actually retrieved out of your database or other storage
| system used by the application. Typically, Eloquent is utilized.
|
| If you have multiple user tables or models you may configure multiple
| providers to represent the model / table. These providers may then
| be assigned to any extra authentication guards you have defined.
|
| Supported: "database", "eloquent"
|
*/
'providers' => [
'users' => [
'driver' => 'eloquent',
'model' => env('AUTH_MODEL', App\Models\User::class),
],
// 'users' => [
// 'driver' => 'database',
// 'table' => 'users',
// ],
],
/*
|--------------------------------------------------------------------------
| Resetting Passwords
|--------------------------------------------------------------------------
|
| These configuration options specify the behavior of Laravel's password
| reset functionality, including the table utilized for token storage
| and the user provider that is invoked to actually retrieve users.
|
| The expiry time is the number of minutes that each reset token will be
| considered valid. This security feature keeps tokens short-lived so
| they have less time to be guessed. You may change this as needed.
|
| The throttle setting is the number of seconds a user must wait before
| generating more password reset tokens. This prevents the user from
| quickly generating a very large amount of password reset tokens.
|
*/
'passwords' => [
'users' => [
'provider' => 'users',
'table' => env('AUTH_PASSWORD_RESET_TOKEN_TABLE', 'password_reset_tokens'),
'expire' => 60,
'throttle' => 60,
],
],
/*
|--------------------------------------------------------------------------
| Password Confirmation Timeout
|--------------------------------------------------------------------------
|
| Here you may define the amount of seconds before a password confirmation
| window expires and users are asked to re-enter their password via the
| confirmation screen. By default, the timeout lasts for three hours.
|
*/
'password_timeout' => env('AUTH_PASSWORD_TIMEOUT', 10800),
];

108
dashboard/config/cache.php Normal file
View File

@ -0,0 +1,108 @@
<?php
use Illuminate\Support\Str;
return [
/*
|--------------------------------------------------------------------------
| Default Cache Store
|--------------------------------------------------------------------------
|
| This option controls the default cache store that will be used by the
| framework. This connection is utilized if another isn't explicitly
| specified when running a cache operation inside the application.
|
*/
'default' => env('CACHE_STORE', 'database'),
/*
|--------------------------------------------------------------------------
| Cache Stores
|--------------------------------------------------------------------------
|
| Here you may define all of the cache "stores" for your application as
| well as their drivers. You may even define multiple stores for the
| same cache driver to group types of items stored in your caches.
|
| Supported drivers: "array", "database", "file", "memcached",
| "redis", "dynamodb", "octane", "null"
|
*/
'stores' => [
'array' => [
'driver' => 'array',
'serialize' => false,
],
'database' => [
'driver' => 'database',
'connection' => env('DB_CACHE_CONNECTION'),
'table' => env('DB_CACHE_TABLE', 'cache'),
'lock_connection' => env('DB_CACHE_LOCK_CONNECTION'),
'lock_table' => env('DB_CACHE_LOCK_TABLE'),
],
'file' => [
'driver' => 'file',
'path' => storage_path('framework/cache/data'),
'lock_path' => storage_path('framework/cache/data'),
],
'memcached' => [
'driver' => 'memcached',
'persistent_id' => env('MEMCACHED_PERSISTENT_ID'),
'sasl' => [
env('MEMCACHED_USERNAME'),
env('MEMCACHED_PASSWORD'),
],
'options' => [
// Memcached::OPT_CONNECT_TIMEOUT => 2000,
],
'servers' => [
[
'host' => env('MEMCACHED_HOST', '127.0.0.1'),
'port' => env('MEMCACHED_PORT', 11211),
'weight' => 100,
],
],
],
'redis' => [
'driver' => 'redis',
'connection' => env('REDIS_CACHE_CONNECTION', 'cache'),
'lock_connection' => env('REDIS_CACHE_LOCK_CONNECTION', 'default'),
],
'dynamodb' => [
'driver' => 'dynamodb',
'key' => env('AWS_ACCESS_KEY_ID'),
'secret' => env('AWS_SECRET_ACCESS_KEY'),
'region' => env('AWS_DEFAULT_REGION', 'us-east-1'),
'table' => env('DYNAMODB_CACHE_TABLE', 'cache'),
'endpoint' => env('DYNAMODB_ENDPOINT'),
],
'octane' => [
'driver' => 'octane',
],
],
/*
|--------------------------------------------------------------------------
| Cache Key Prefix
|--------------------------------------------------------------------------
|
| When utilizing the APC, database, memcached, Redis, and DynamoDB cache
| stores, there might be other applications using the same cache. For
| that reason, you may prefix every cache key to avoid collisions.
|
*/
'prefix' => env('CACHE_PREFIX', Str::slug(env('APP_NAME', 'laravel'), '_').'_cache_'),
];

View File

@ -0,0 +1,173 @@
<?php
use Illuminate\Support\Str;
return [
/*
|--------------------------------------------------------------------------
| Default Database Connection Name
|--------------------------------------------------------------------------
|
| Here you may specify which of the database connections below you wish
| to use as your default connection for database operations. This is
| the connection which will be utilized unless another connection
| is explicitly specified when you execute a query / statement.
|
*/
'default' => env('DB_CONNECTION', 'sqlite'),
/*
|--------------------------------------------------------------------------
| Database Connections
|--------------------------------------------------------------------------
|
| Below are all of the database connections defined for your application.
| An example configuration is provided for each database system which
| is supported by Laravel. You're free to add / remove connections.
|
*/
'connections' => [
'sqlite' => [
'driver' => 'sqlite',
'url' => env('DB_URL'),
'database' => env('DB_DATABASE', database_path('database.sqlite')),
'prefix' => '',
'foreign_key_constraints' => env('DB_FOREIGN_KEYS', true),
'busy_timeout' => null,
'journal_mode' => null,
'synchronous' => null,
],
'mysql' => [
'driver' => 'mysql',
'url' => env('DB_URL'),
'host' => env('DB_HOST', '127.0.0.1'),
'port' => env('DB_PORT', '3306'),
'database' => env('DB_DATABASE', 'laravel'),
'username' => env('DB_USERNAME', 'root'),
'password' => env('DB_PASSWORD', ''),
'unix_socket' => env('DB_SOCKET', ''),
'charset' => env('DB_CHARSET', 'utf8mb4'),
'collation' => env('DB_COLLATION', 'utf8mb4_unicode_ci'),
'prefix' => '',
'prefix_indexes' => true,
'strict' => true,
'engine' => null,
'options' => extension_loaded('pdo_mysql') ? array_filter([
PDO::MYSQL_ATTR_SSL_CA => env('MYSQL_ATTR_SSL_CA'),
]) : [],
],
'mariadb' => [
'driver' => 'mariadb',
'url' => env('DB_URL'),
'host' => env('DB_HOST', '127.0.0.1'),
'port' => env('DB_PORT', '3306'),
'database' => env('DB_DATABASE', 'laravel'),
'username' => env('DB_USERNAME', 'root'),
'password' => env('DB_PASSWORD', ''),
'unix_socket' => env('DB_SOCKET', ''),
'charset' => env('DB_CHARSET', 'utf8mb4'),
'collation' => env('DB_COLLATION', 'utf8mb4_unicode_ci'),
'prefix' => '',
'prefix_indexes' => true,
'strict' => true,
'engine' => null,
'options' => extension_loaded('pdo_mysql') ? array_filter([
PDO::MYSQL_ATTR_SSL_CA => env('MYSQL_ATTR_SSL_CA'),
]) : [],
],
'pgsql' => [
'driver' => 'pgsql',
'url' => env('DB_URL'),
'host' => env('DB_HOST', '127.0.0.1'),
'port' => env('DB_PORT', '5432'),
'database' => env('DB_DATABASE', 'laravel'),
'username' => env('DB_USERNAME', 'root'),
'password' => env('DB_PASSWORD', ''),
'charset' => env('DB_CHARSET', 'utf8'),
'prefix' => '',
'prefix_indexes' => true,
'search_path' => 'public',
'sslmode' => 'prefer',
],
'sqlsrv' => [
'driver' => 'sqlsrv',
'url' => env('DB_URL'),
'host' => env('DB_HOST', 'localhost'),
'port' => env('DB_PORT', '1433'),
'database' => env('DB_DATABASE', 'laravel'),
'username' => env('DB_USERNAME', 'root'),
'password' => env('DB_PASSWORD', ''),
'charset' => env('DB_CHARSET', 'utf8'),
'prefix' => '',
'prefix_indexes' => true,
// 'encrypt' => env('DB_ENCRYPT', 'yes'),
// 'trust_server_certificate' => env('DB_TRUST_SERVER_CERTIFICATE', 'false'),
],
],
/*
|--------------------------------------------------------------------------
| Migration Repository Table
|--------------------------------------------------------------------------
|
| This table keeps track of all the migrations that have already run for
| your application. Using this information, we can determine which of
| the migrations on disk haven't actually been run on the database.
|
*/
'migrations' => [
'table' => 'migrations',
'update_date_on_publish' => true,
],
/*
|--------------------------------------------------------------------------
| Redis Databases
|--------------------------------------------------------------------------
|
| Redis is an open source, fast, and advanced key-value store that also
| provides a richer body of commands than a typical key-value system
| such as Memcached. You may define your connection settings here.
|
*/
'redis' => [
'client' => env('REDIS_CLIENT', 'phpredis'),
'options' => [
'cluster' => env('REDIS_CLUSTER', 'redis'),
'prefix' => env('REDIS_PREFIX', Str::slug(env('APP_NAME', 'laravel'), '_').'_database_'),
],
'default' => [
'url' => env('REDIS_URL'),
'host' => env('REDIS_HOST', '127.0.0.1'),
'username' => env('REDIS_USERNAME'),
'password' => env('REDIS_PASSWORD'),
'port' => env('REDIS_PORT', '6379'),
'database' => env('REDIS_DB', '0'),
],
'cache' => [
'url' => env('REDIS_URL'),
'host' => env('REDIS_HOST', '127.0.0.1'),
'username' => env('REDIS_USERNAME'),
'password' => env('REDIS_PASSWORD'),
'port' => env('REDIS_PORT', '6379'),
'database' => env('REDIS_CACHE_DB', '1'),
],
],
];

View File

@ -0,0 +1,77 @@
<?php
return [
/*
|--------------------------------------------------------------------------
| Default Filesystem Disk
|--------------------------------------------------------------------------
|
| Here you may specify the default filesystem disk that should be used
| by the framework. The "local" disk, as well as a variety of cloud
| based disks are available to your application for file storage.
|
*/
'default' => env('FILESYSTEM_DISK', 'local'),
/*
|--------------------------------------------------------------------------
| Filesystem Disks
|--------------------------------------------------------------------------
|
| Below you may configure as many filesystem disks as necessary, and you
| may even configure multiple disks for the same driver. Examples for
| most supported storage drivers are configured here for reference.
|
| Supported drivers: "local", "ftp", "sftp", "s3"
|
*/
'disks' => [
'local' => [
'driver' => 'local',
'root' => storage_path('app/private'),
'serve' => true,
'throw' => false,
],
'public' => [
'driver' => 'local',
'root' => storage_path('app/public'),
'url' => env('APP_URL').'/storage',
'visibility' => 'public',
'throw' => false,
],
's3' => [
'driver' => 's3',
'key' => env('AWS_ACCESS_KEY_ID'),
'secret' => env('AWS_SECRET_ACCESS_KEY'),
'region' => env('AWS_DEFAULT_REGION'),
'bucket' => env('AWS_BUCKET'),
'url' => env('AWS_URL'),
'endpoint' => env('AWS_ENDPOINT'),
'use_path_style_endpoint' => env('AWS_USE_PATH_STYLE_ENDPOINT', false),
'throw' => false,
],
],
/*
|--------------------------------------------------------------------------
| Symbolic Links
|--------------------------------------------------------------------------
|
| Here you may configure the symbolic links that will be created when the
| `storage:link` Artisan command is executed. The array keys should be
| the locations of the links and the values should be their targets.
|
*/
'links' => [
public_path('storage') => storage_path('app/public'),
],
];

View File

@ -0,0 +1,132 @@
<?php
use Monolog\Handler\NullHandler;
use Monolog\Handler\StreamHandler;
use Monolog\Handler\SyslogUdpHandler;
use Monolog\Processor\PsrLogMessageProcessor;
return [
/*
|--------------------------------------------------------------------------
| Default Log Channel
|--------------------------------------------------------------------------
|
| This option defines the default log channel that is utilized to write
| messages to your logs. The value provided here should match one of
| the channels present in the list of "channels" configured below.
|
*/
'default' => env('LOG_CHANNEL', 'stack'),
/*
|--------------------------------------------------------------------------
| Deprecations Log Channel
|--------------------------------------------------------------------------
|
| This option controls the log channel that should be used to log warnings
| regarding deprecated PHP and library features. This allows you to get
| your application ready for upcoming major versions of dependencies.
|
*/
'deprecations' => [
'channel' => env('LOG_DEPRECATIONS_CHANNEL', 'null'),
'trace' => env('LOG_DEPRECATIONS_TRACE', false),
],
/*
|--------------------------------------------------------------------------
| Log Channels
|--------------------------------------------------------------------------
|
| Here you may configure the log channels for your application. Laravel
| utilizes the Monolog PHP logging library, which includes a variety
| of powerful log handlers and formatters that you're free to use.
|
| Available drivers: "single", "daily", "slack", "syslog",
| "errorlog", "monolog", "custom", "stack"
|
*/
'channels' => [
'stack' => [
'driver' => 'stack',
'channels' => explode(',', env('LOG_STACK', 'single')),
'ignore_exceptions' => false,
],
'single' => [
'driver' => 'single',
'path' => storage_path('logs/laravel.log'),
'level' => env('LOG_LEVEL', 'debug'),
'replace_placeholders' => true,
],
'daily' => [
'driver' => 'daily',
'path' => storage_path('logs/laravel.log'),
'level' => env('LOG_LEVEL', 'debug'),
'days' => env('LOG_DAILY_DAYS', 14),
'replace_placeholders' => true,
],
'slack' => [
'driver' => 'slack',
'url' => env('LOG_SLACK_WEBHOOK_URL'),
'username' => env('LOG_SLACK_USERNAME', 'Laravel Log'),
'emoji' => env('LOG_SLACK_EMOJI', ':boom:'),
'level' => env('LOG_LEVEL', 'critical'),
'replace_placeholders' => true,
],
'papertrail' => [
'driver' => 'monolog',
'level' => env('LOG_LEVEL', 'debug'),
'handler' => env('LOG_PAPERTRAIL_HANDLER', SyslogUdpHandler::class),
'handler_with' => [
'host' => env('PAPERTRAIL_URL'),
'port' => env('PAPERTRAIL_PORT'),
'connectionString' => 'tls://'.env('PAPERTRAIL_URL').':'.env('PAPERTRAIL_PORT'),
],
'processors' => [PsrLogMessageProcessor::class],
],
'stderr' => [
'driver' => 'monolog',
'level' => env('LOG_LEVEL', 'debug'),
'handler' => StreamHandler::class,
'formatter' => env('LOG_STDERR_FORMATTER'),
'with' => [
'stream' => 'php://stderr',
],
'processors' => [PsrLogMessageProcessor::class],
],
'syslog' => [
'driver' => 'syslog',
'level' => env('LOG_LEVEL', 'debug'),
'facility' => env('LOG_SYSLOG_FACILITY', LOG_USER),
'replace_placeholders' => true,
],
'errorlog' => [
'driver' => 'errorlog',
'level' => env('LOG_LEVEL', 'debug'),
'replace_placeholders' => true,
],
'null' => [
'driver' => 'monolog',
'handler' => NullHandler::class,
],
'emergency' => [
'path' => storage_path('logs/laravel.log'),
],
],
];

116
dashboard/config/mail.php Normal file
View File

@ -0,0 +1,116 @@
<?php
return [
/*
|--------------------------------------------------------------------------
| Default Mailer
|--------------------------------------------------------------------------
|
| This option controls the default mailer that is used to send all email
| messages unless another mailer is explicitly specified when sending
| the message. All additional mailers can be configured within the
| "mailers" array. Examples of each type of mailer are provided.
|
*/
'default' => env('MAIL_MAILER', 'log'),
/*
|--------------------------------------------------------------------------
| Mailer Configurations
|--------------------------------------------------------------------------
|
| Here you may configure all of the mailers used by your application plus
| their respective settings. Several examples have been configured for
| you and you are free to add your own as your application requires.
|
| Laravel supports a variety of mail "transport" drivers that can be used
| when delivering an email. You may specify which one you're using for
| your mailers below. You may also add additional mailers if needed.
|
| Supported: "smtp", "sendmail", "mailgun", "ses", "ses-v2",
| "postmark", "resend", "log", "array",
| "failover", "roundrobin"
|
*/
'mailers' => [
'smtp' => [
'transport' => 'smtp',
'scheme' => env('MAIL_SCHEME'),
'url' => env('MAIL_URL'),
'host' => env('MAIL_HOST', '127.0.0.1'),
'port' => env('MAIL_PORT', 2525),
'username' => env('MAIL_USERNAME'),
'password' => env('MAIL_PASSWORD'),
'timeout' => null,
'local_domain' => env('MAIL_EHLO_DOMAIN', parse_url(env('APP_URL', 'http://localhost'), PHP_URL_HOST)),
],
'ses' => [
'transport' => 'ses',
],
'postmark' => [
'transport' => 'postmark',
// 'message_stream_id' => env('POSTMARK_MESSAGE_STREAM_ID'),
// 'client' => [
// 'timeout' => 5,
// ],
],
'resend' => [
'transport' => 'resend',
],
'sendmail' => [
'transport' => 'sendmail',
'path' => env('MAIL_SENDMAIL_PATH', '/usr/sbin/sendmail -bs -i'),
],
'log' => [
'transport' => 'log',
'channel' => env('MAIL_LOG_CHANNEL'),
],
'array' => [
'transport' => 'array',
],
'failover' => [
'transport' => 'failover',
'mailers' => [
'smtp',
'log',
],
],
'roundrobin' => [
'transport' => 'roundrobin',
'mailers' => [
'ses',
'postmark',
],
],
],
/*
|--------------------------------------------------------------------------
| Global "From" Address
|--------------------------------------------------------------------------
|
| You may wish for all emails sent by your application to be sent from
| the same address. Here you may specify a name and address that is
| used globally for all emails that are sent by your application.
|
*/
'from' => [
'address' => env('MAIL_FROM_ADDRESS', 'hello@example.com'),
'name' => env('MAIL_FROM_NAME', 'Example'),
],
];

112
dashboard/config/queue.php Normal file
View File

@ -0,0 +1,112 @@
<?php
return [
/*
|--------------------------------------------------------------------------
| Default Queue Connection Name
|--------------------------------------------------------------------------
|
| Laravel's queue supports a variety of backends via a single, unified
| API, giving you convenient access to each backend using identical
| syntax for each. The default queue connection is defined below.
|
*/
'default' => env('QUEUE_CONNECTION', 'database'),
/*
|--------------------------------------------------------------------------
| Queue Connections
|--------------------------------------------------------------------------
|
| Here you may configure the connection options for every queue backend
| used by your application. An example configuration is provided for
| each backend supported by Laravel. You're also free to add more.
|
| Drivers: "sync", "database", "beanstalkd", "sqs", "redis", "null"
|
*/
'connections' => [
'sync' => [
'driver' => 'sync',
],
'database' => [
'driver' => 'database',
'connection' => env('DB_QUEUE_CONNECTION'),
'table' => env('DB_QUEUE_TABLE', 'jobs'),
'queue' => env('DB_QUEUE', 'default'),
'retry_after' => (int) env('DB_QUEUE_RETRY_AFTER', 90),
'after_commit' => false,
],
'beanstalkd' => [
'driver' => 'beanstalkd',
'host' => env('BEANSTALKD_QUEUE_HOST', 'localhost'),
'queue' => env('BEANSTALKD_QUEUE', 'default'),
'retry_after' => (int) env('BEANSTALKD_QUEUE_RETRY_AFTER', 90),
'block_for' => 0,
'after_commit' => false,
],
'sqs' => [
'driver' => 'sqs',
'key' => env('AWS_ACCESS_KEY_ID'),
'secret' => env('AWS_SECRET_ACCESS_KEY'),
'prefix' => env('SQS_PREFIX', 'https://sqs.us-east-1.amazonaws.com/your-account-id'),
'queue' => env('SQS_QUEUE', 'default'),
'suffix' => env('SQS_SUFFIX'),
'region' => env('AWS_DEFAULT_REGION', 'us-east-1'),
'after_commit' => false,
],
'redis' => [
'driver' => 'redis',
'connection' => env('REDIS_QUEUE_CONNECTION', 'default'),
'queue' => env('REDIS_QUEUE', 'default'),
'retry_after' => (int) env('REDIS_QUEUE_RETRY_AFTER', 90),
'block_for' => null,
'after_commit' => false,
],
],
/*
|--------------------------------------------------------------------------
| Job Batching
|--------------------------------------------------------------------------
|
| The following options configure the database and table that store job
| batching information. These options can be updated to any database
| connection and table which has been defined by your application.
|
*/
'batching' => [
'database' => env('DB_CONNECTION', 'sqlite'),
'table' => 'job_batches',
],
/*
|--------------------------------------------------------------------------
| Failed Queue Jobs
|--------------------------------------------------------------------------
|
| These options configure the behavior of failed queue job logging so you
| can control how and where failed jobs are stored. Laravel ships with
| support for storing failed jobs in a simple file or in a database.
|
| Supported drivers: "database-uuids", "dynamodb", "file", "null"
|
*/
'failed' => [
'driver' => env('QUEUE_FAILED_DRIVER', 'database-uuids'),
'database' => env('DB_CONNECTION', 'sqlite'),
'table' => 'failed_jobs',
],
];

View File

@ -0,0 +1,38 @@
<?php
return [
/*
|--------------------------------------------------------------------------
| Third Party Services
|--------------------------------------------------------------------------
|
| This file is for storing the credentials for third party services such
| as Mailgun, Postmark, AWS and more. This file provides the de facto
| location for this type of information, allowing packages to have
| a conventional file to locate the various service credentials.
|
*/
'postmark' => [
'token' => env('POSTMARK_TOKEN'),
],
'ses' => [
'key' => env('AWS_ACCESS_KEY_ID'),
'secret' => env('AWS_SECRET_ACCESS_KEY'),
'region' => env('AWS_DEFAULT_REGION', 'us-east-1'),
],
'resend' => [
'key' => env('RESEND_KEY'),
],
'slack' => [
'notifications' => [
'bot_user_oauth_token' => env('SLACK_BOT_USER_OAUTH_TOKEN'),
'channel' => env('SLACK_BOT_USER_DEFAULT_CHANNEL'),
],
],
];

View File

@ -0,0 +1,217 @@
<?php
use Illuminate\Support\Str;
return [
/*
|--------------------------------------------------------------------------
| Default Session Driver
|--------------------------------------------------------------------------
|
| This option determines the default session driver that is utilized for
| incoming requests. Laravel supports a variety of storage options to
| persist session data. Database storage is a great default choice.
|
| Supported: "file", "cookie", "database", "apc",
| "memcached", "redis", "dynamodb", "array"
|
*/
'driver' => env('SESSION_DRIVER', 'database'),
/*
|--------------------------------------------------------------------------
| Session Lifetime
|--------------------------------------------------------------------------
|
| Here you may specify the number of minutes that you wish the session
| to be allowed to remain idle before it expires. If you want them
| to expire immediately when the browser is closed then you may
| indicate that via the expire_on_close configuration option.
|
*/
'lifetime' => env('SESSION_LIFETIME', 120),
'expire_on_close' => env('SESSION_EXPIRE_ON_CLOSE', false),
/*
|--------------------------------------------------------------------------
| Session Encryption
|--------------------------------------------------------------------------
|
| This option allows you to easily specify that all of your session data
| should be encrypted before it's stored. All encryption is performed
| automatically by Laravel and you may use the session like normal.
|
*/
'encrypt' => env('SESSION_ENCRYPT', false),
/*
|--------------------------------------------------------------------------
| Session File Location
|--------------------------------------------------------------------------
|
| When utilizing the "file" session driver, the session files are placed
| on disk. The default storage location is defined here; however, you
| are free to provide another location where they should be stored.
|
*/
'files' => storage_path('framework/sessions'),
/*
|--------------------------------------------------------------------------
| Session Database Connection
|--------------------------------------------------------------------------
|
| When using the "database" or "redis" session drivers, you may specify a
| connection that should be used to manage these sessions. This should
| correspond to a connection in your database configuration options.
|
*/
'connection' => env('SESSION_CONNECTION'),
/*
|--------------------------------------------------------------------------
| Session Database Table
|--------------------------------------------------------------------------
|
| When using the "database" session driver, you may specify the table to
| be used to store sessions. Of course, a sensible default is defined
| for you; however, you're welcome to change this to another table.
|
*/
'table' => env('SESSION_TABLE', 'sessions'),
/*
|--------------------------------------------------------------------------
| Session Cache Store
|--------------------------------------------------------------------------
|
| When using one of the framework's cache driven session backends, you may
| define the cache store which should be used to store the session data
| between requests. This must match one of your defined cache stores.
|
| Affects: "apc", "dynamodb", "memcached", "redis"
|
*/
'store' => env('SESSION_STORE'),
/*
|--------------------------------------------------------------------------
| Session Sweeping Lottery
|--------------------------------------------------------------------------
|
| Some session drivers must manually sweep their storage location to get
| rid of old sessions from storage. Here are the chances that it will
| happen on a given request. By default, the odds are 2 out of 100.
|
*/
'lottery' => [2, 100],
/*
|--------------------------------------------------------------------------
| Session Cookie Name
|--------------------------------------------------------------------------
|
| Here you may change the name of the session cookie that is created by
| the framework. Typically, you should not need to change this value
| since doing so does not grant a meaningful security improvement.
|
*/
'cookie' => env(
'SESSION_COOKIE',
Str::slug(env('APP_NAME', 'laravel'), '_').'_session'
),
/*
|--------------------------------------------------------------------------
| Session Cookie Path
|--------------------------------------------------------------------------
|
| The session cookie path determines the path for which the cookie will
| be regarded as available. Typically, this will be the root path of
| your application, but you're free to change this when necessary.
|
*/
'path' => env('SESSION_PATH', '/'),
/*
|--------------------------------------------------------------------------
| Session Cookie Domain
|--------------------------------------------------------------------------
|
| This value determines the domain and subdomains the session cookie is
| available to. By default, the cookie will be available to the root
| domain and all subdomains. Typically, this shouldn't be changed.
|
*/
'domain' => env('SESSION_DOMAIN'),
/*
|--------------------------------------------------------------------------
| HTTPS Only Cookies
|--------------------------------------------------------------------------
|
| By setting this option to true, session cookies will only be sent back
| to the server if the browser has a HTTPS connection. This will keep
| the cookie from being sent to you when it can't be done securely.
|
*/
'secure' => env('SESSION_SECURE_COOKIE'),
/*
|--------------------------------------------------------------------------
| HTTP Access Only
|--------------------------------------------------------------------------
|
| Setting this value to true will prevent JavaScript from accessing the
| value of the cookie and the cookie will only be accessible through
| the HTTP protocol. It's unlikely you should disable this option.
|
*/
'http_only' => env('SESSION_HTTP_ONLY', true),
/*
|--------------------------------------------------------------------------
| Same-Site Cookies
|--------------------------------------------------------------------------
|
| This option determines how your cookies behave when cross-site requests
| take place, and can be used to mitigate CSRF attacks. By default, we
| will set this value to "lax" to permit secure cross-site requests.
|
| See: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie#samesitesamesite-value
|
| Supported: "lax", "strict", "none", null
|
*/
'same_site' => env('SESSION_SAME_SITE', 'lax'),
/*
|--------------------------------------------------------------------------
| Partitioned Cookies
|--------------------------------------------------------------------------
|
| Setting this value to true will tie the cookie to the top-level site for
| a cross-site context. Partitioned cookies are accepted by the browser
| when flagged "secure" and the Same-Site attribute is set to "none".
|
*/
'partitioned' => env('SESSION_PARTITIONED_COOKIE', false),
];

1
dashboard/database/.gitignore vendored Normal file
View File

@ -0,0 +1 @@
*.sqlite*

View File

@ -0,0 +1,44 @@
<?php
namespace Database\Factories;
use Illuminate\Database\Eloquent\Factories\Factory;
use Illuminate\Support\Facades\Hash;
use Illuminate\Support\Str;
/**
* @extends \Illuminate\Database\Eloquent\Factories\Factory<\App\Models\User>
*/
class UserFactory extends Factory
{
/**
* The current password being used by the factory.
*/
protected static ?string $password;
/**
* Define the model's default state.
*
* @return array<string, mixed>
*/
public function definition(): array
{
return [
'name' => fake()->name(),
'email' => fake()->unique()->safeEmail(),
'email_verified_at' => now(),
'password' => static::$password ??= Hash::make('password'),
'remember_token' => Str::random(10),
];
}
/**
* Indicate that the model's email address should be unverified.
*/
public function unverified(): static
{
return $this->state(fn (array $attributes) => [
'email_verified_at' => null,
]);
}
}

View File

@ -0,0 +1,49 @@
<?php
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;
return new class extends Migration
{
/**
* Run the migrations.
*/
public function up(): void
{
Schema::create('users', function (Blueprint $table) {
$table->id();
$table->string('name');
$table->string('email')->unique();
$table->timestamp('email_verified_at')->nullable();
$table->string('password');
$table->rememberToken();
$table->timestamps();
});
Schema::create('password_reset_tokens', function (Blueprint $table) {
$table->string('email')->primary();
$table->string('token');
$table->timestamp('created_at')->nullable();
});
Schema::create('sessions', function (Blueprint $table) {
$table->string('id')->primary();
$table->foreignId('user_id')->nullable()->index();
$table->string('ip_address', 45)->nullable();
$table->text('user_agent')->nullable();
$table->longText('payload');
$table->integer('last_activity')->index();
});
}
/**
* Reverse the migrations.
*/
public function down(): void
{
Schema::dropIfExists('users');
Schema::dropIfExists('password_reset_tokens');
Schema::dropIfExists('sessions');
}
};

View File

@ -0,0 +1,35 @@
<?php
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;
return new class extends Migration
{
/**
* Run the migrations.
*/
public function up(): void
{
Schema::create('cache', function (Blueprint $table) {
$table->string('key')->primary();
$table->mediumText('value');
$table->integer('expiration');
});
Schema::create('cache_locks', function (Blueprint $table) {
$table->string('key')->primary();
$table->string('owner');
$table->integer('expiration');
});
}
/**
* Reverse the migrations.
*/
public function down(): void
{
Schema::dropIfExists('cache');
Schema::dropIfExists('cache_locks');
}
};

View File

@ -0,0 +1,57 @@
<?php
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;
return new class extends Migration
{
/**
* Run the migrations.
*/
public function up(): void
{
Schema::create('jobs', function (Blueprint $table) {
$table->id();
$table->string('queue')->index();
$table->longText('payload');
$table->unsignedTinyInteger('attempts');
$table->unsignedInteger('reserved_at')->nullable();
$table->unsignedInteger('available_at');
$table->unsignedInteger('created_at');
});
Schema::create('job_batches', function (Blueprint $table) {
$table->string('id')->primary();
$table->string('name');
$table->integer('total_jobs');
$table->integer('pending_jobs');
$table->integer('failed_jobs');
$table->longText('failed_job_ids');
$table->mediumText('options')->nullable();
$table->integer('cancelled_at')->nullable();
$table->integer('created_at');
$table->integer('finished_at')->nullable();
});
Schema::create('failed_jobs', function (Blueprint $table) {
$table->id();
$table->string('uuid')->unique();
$table->text('connection');
$table->text('queue');
$table->longText('payload');
$table->longText('exception');
$table->timestamp('failed_at')->useCurrent();
});
}
/**
* Reverse the migrations.
*/
public function down(): void
{
Schema::dropIfExists('jobs');
Schema::dropIfExists('job_batches');
Schema::dropIfExists('failed_jobs');
}
};

View File

@ -0,0 +1,23 @@
<?php
namespace Database\Seeders;
use App\Models\User;
// use Illuminate\Database\Console\Seeds\WithoutModelEvents;
use Illuminate\Database\Seeder;
class DatabaseSeeder extends Seeder
{
/**
* Seed the application's database.
*/
public function run(): void
{
// User::factory(10)->create();
User::factory()->create([
'name' => 'Test User',
'email' => 'test@example.com',
]);
}
}

2977
dashboard/package-lock.json generated Normal file

File diff suppressed because it is too large

23
dashboard/package.json Normal file
View File

@ -0,0 +1,23 @@
{
"private": true,
"type": "module",
"scripts": {
"build": "vite build",
"dev": "vite"
},
"devDependencies": {
"autoprefixer": "^10.4.20",
"axios": "^1.7.4",
"concurrently": "^9.0.1",
"laravel-vite-plugin": "^1.0",
"postcss": "^8.4.47",
"tailwindcss": "^3.4.13",
"vite": "^5.0"
},
"dependencies": {
"@patternfly/patternfly": "^6.0.0",
"@picocss/pico": "^2.0.6",
"echarts": "^5.5.1",
"leaflet": "^1.9.4"
}
}

33
dashboard/phpunit.xml Normal file
View File

@ -0,0 +1,33 @@
<?xml version="1.0" encoding="UTF-8"?>
<phpunit xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="vendor/phpunit/phpunit/phpunit.xsd"
bootstrap="vendor/autoload.php"
colors="true"
>
<testsuites>
<testsuite name="Unit">
<directory>tests/Unit</directory>
</testsuite>
<testsuite name="Feature">
<directory>tests/Feature</directory>
</testsuite>
</testsuites>
<source>
<include>
<directory>app</directory>
</include>
</source>
<php>
<env name="APP_ENV" value="testing"/>
<env name="APP_MAINTENANCE_DRIVER" value="file"/>
<env name="BCRYPT_ROUNDS" value="4"/>
<env name="CACHE_STORE" value="array"/>
<!-- <env name="DB_CONNECTION" value="sqlite"/> -->
<!-- <env name="DB_DATABASE" value=":memory:"/> -->
<env name="MAIL_MAILER" value="array"/>
<env name="PULSE_ENABLED" value="false"/>
<env name="QUEUE_CONNECTION" value="sync"/>
<env name="SESSION_DRIVER" value="array"/>
<env name="TELESCOPE_ENABLED" value="false"/>
</php>
</phpunit>

View File

@ -0,0 +1,6 @@
export default {
plugins: {
tailwindcss: {},
autoprefixer: {},
},
};

View File

@ -0,0 +1,21 @@
<IfModule mod_rewrite.c>
<IfModule mod_negotiation.c>
Options -MultiViews -Indexes
</IfModule>
RewriteEngine On
# Handle Authorization Header
RewriteCond %{HTTP:Authorization} .
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
# Redirect Trailing Slashes If Not A Folder...
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_URI} (.+)/$
RewriteRule ^ %1 [L,R=301]
# Send Requests To Front Controller...
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ index.php [L]
</IfModule>

View File

@ -0,0 +1,17 @@
<?php
use Illuminate\Http\Request;
define('LARAVEL_START', microtime(true));
// Determine if the application is in maintenance mode...
if (file_exists($maintenance = __DIR__.'/../storage/framework/maintenance.php')) {
require $maintenance;
}
// Register the Composer autoloader...
require __DIR__.'/../vendor/autoload.php';
// Bootstrap Laravel and handle the request...
(require_once __DIR__.'/../bootstrap/app.php')
->handleRequest(Request::capture());

View File

@ -0,0 +1,2 @@
User-agent: *
Disallow:

View File

@ -0,0 +1,188 @@
/* 1. Use a more-intuitive box-sizing model */
*, *::before, *::after {
box-sizing: border-box;
}
/* 2. Remove default margin */
* {
margin: 0;
font-family: sans-serif;
}
body {
/* 3. Add accessible line-height */
line-height: 1.5;
/* 4. Improve text rendering */
-webkit-font-smoothing: antialiased;
padding: 0 1em;
height: 100vh;
background-image: radial-gradient(73% 147%, #EADFDF 59%, #ECE2DF 100%), radial-gradient(91% 146%, rgba(255,255,255,0.50) 47%, rgba(0,0,0,0.50) 100%);
background-blend-mode: screen;
}
/* 5. Improve media defaults */
img, picture, video, canvas, svg {
display: block;
max-width: 100%;
}
/* 6. Inherit fonts for form controls */
input, button, textarea, select {
font: inherit;
}
/* 7. Avoid text overflows */
p, h1, h2, h3, h4, h5, h6 {
overflow-wrap: break-word;
}
/* 8. Improve line wrapping */
p {
text-wrap: pretty;
}
h1, h2, h3, h4, h5, h6 {
text-wrap: balance;
}
dt{
font-weight: 600;
}
dd + dt{
margin-top: .2em;
}
span + button{
margin-left: .5em;
}
button[popovertarget]{
background: no-repeat center / .3em #5470c6 url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 192 512'%3E%3C!--!Font Awesome Free 6.7.2 by @fontawesome - https://fontawesome.com License - https://fontawesome.com/license/free Copyright 2025 Fonticons, Inc.--%3E%3Cpath fill='%23fff' d='M48 80a48 48 0 1 1 96 0A48 48 0 1 1 48 80zM0 224c0-17.7 14.3-32 32-32l64 0c17.7 0 32 14.3 32 32l0 224 32 0c17.7 0 32 14.3 32 32s-14.3 32-32 32L32 512c-17.7 0-32-14.3-32-32s14.3-32 32-32l32 0 0-192-32 0c-17.7 0-32-14.3-32-32z'/%3E%3C/svg%3E%0A");
cursor: pointer;
display: inline-block;
width: 1.5em;
height: 1.5em;
border-radius: 50%;
border: 1px solid #fff;
}
button[popovertarget]::before{
color: #fff;
font-weight: 700;
}
button[popovertarget]>span{
position: absolute;
left: -999em;
top: -999em;
}
[popover] {
border: none;
border-radius: 1em;
background: #fff;
padding: 1.5em;
box-shadow: .0625em .0625em .625em rgba(0, 0, 0, 0.1);
max-width: 40em;
top: 4em;
margin: 0 auto;
}
[popover]::backdrop{
background-color: rgba(0,0,0,.5);
}
/*
9. Create a root stacking context
*/
#root, #__next {
isolation: isolate;
}
nav>ul{
list-style: none;
}
body>header{
position: fixed;
top: 0;
left: 0;
width: 100%;
height: 3em;
background: #ccc;
z-index: 99;
display: flex;
align-items: center;
padding: 0 1em;
}
main{
width: 100%;
height: 100vh;
padding: 4em 0 1em;
display: grid;
gap: .5em;
}
body.overview main{
grid-template-columns: repeat(8, minmax(1%, 50%));
grid-template-rows: repeat(4, 1fr);
grid-template-areas:
"chart3 chart3 chart3 chart1 chart1 chart1 chart4 chart4"
"chart3 chart3 chart3 chart1 chart1 chart1 chart4 chart4"
"chart3 chart3 chart3 chart2 chart2 chart2 chart4 chart4"
"chart3 chart3 chart3 chart2 chart2 chart2 chart4 chart4"
}
body.property main{
grid-template-columns: repeat(4, minmax(10%, 50%));
grid-template-rows: repeat(3, 1fr) 4em;
grid-template-areas:
"chart2 chart2 chart5 chart5"
"chart1 chart1 chart3 chart4"
"chart1 chart1 chart3 chart4"
"timeline timeline timeline timeline";
}
article{
background: #f9f9f9;
border: .0625em solid #ccc;
box-shadow: 0 5px 10px rgba(154,160,185,.05), 0 15px 40px rgba(166,173,201,.2);
border-radius: .2em;
display: grid;
}
article.header{
grid-template-columns: 100%;
grid-template-rows: minmax(1%, 10%) 1fr;
padding: .5em 1em 1em .5em;
}
article>header{
display: grid;
grid-template-columns: 1fr 1em;
grid-template-rows: 1fr;
}
article>header>h2{
font-size: .8em;
font-weight: 600;
}
@media(max-width: 960px){
body{
height: auto;
}
main{
height: auto;
grid-template-columns: 100%;
grid-template-rows: repeat(4, minmax(20em, 25em));
}
}

4
dashboard/resources/css/pico.min.css vendored Normal file

File diff suppressed because one or more lines are too long

View File

@ -0,0 +1,4 @@
import * as echarts from 'echarts';
import 'leaflet'
window.echarts = echarts;

View File

@ -0,0 +1,4 @@
import axios from 'axios';
window.axios = axios;
window.axios.defaults.headers.common['X-Requested-With'] = 'XMLHttpRequest';

View File

@ -0,0 +1,17 @@
<!DOCTYPE html>
<html lang="de">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Dashboard</title>
@vite(['resources/css/app.css', 'resources/js/app.js', 'node_modules/leaflet/dist/leaflet.css'])
</head>
<body class="@yield('body-class')">
<header>
@yield('header')
</header>
<main>
@yield('main')
</main>
</body>
</html>

View File

@ -0,0 +1,261 @@
@extends('base')
@section('body-class', 'overview')
@section('main')
<article class="header" style="grid-area: chart1;">
<header>
<h2>
Anzahl jemals gefundener Kurzzeitmietobjekte pro Region
</h2>
<button popovertarget="pop1">
<span>Erklärungen zum Diagramm</span>
</button>
<div popover id="pop1">
<p>Das Diagramm zeigt...</p>
</div>
</header>
<div id="chart-props-per-region"></div>
</article>
<article class="header" style="grid-area: chart2;">
<header>
<h2>
Entwicklung der Anzahl jemals gefundener Kurzzeitmietobjekte
</h2>
</header>
<div id="extractions"></div>
</article>
<article style="grid-area: chart4;">
<div id="leaflet"></div>
</article>
<article class="header" style="grid-area: chart3;">
<header>
<h2>
Gesamtauslastung
</h2>
</header>
<div id="chart-heatmap"></div>
</article>
<script type="module">
const sharedOptions = {
basic: {
color: ['#f1eef6','#bdc9e1','#74a9cf','#2b8cbe','#045a8d'],
grid: {
top: 20,
left: 60,
right: 0,
bottom: 50
},
name: (opt) => {
return {
name: opt.name,
nameLocation: opt.location,
nameGap: 24,
nameTextStyle: {
fontWeight: 'bold',
},
}
}
}
}
const extractionDates = {!! json_encode($regionPropertiesCapacities['scrapeDates']) !!};
const chartHeatmap = document.getElementById('chart-heatmap');
const cHeatmap = echarts.init(chartHeatmap);
const cHeatmapOptions = {
tooltip: {
position: 'top'
},
grid: {
top: 30,
right: 0,
bottom: 0,
left: 0
},
dataZoom: [{
type: 'inside'
}
],
xAxis: {
show: false,
name: 'Kurzzeitmietobjekt',
type: 'category',
data: extractionDates,
splitArea: {
show: false
},
axisLabel: {
show: true,
}
},
yAxis: {
show: false,
type: 'category',
data: {!! json_encode($regionPropertiesCapacities['property_ids']) !!},
splitArea: {
show: true
}
},
visualMap: {
type: 'piecewise',
min: 0,
max: 100,
calculable: true,
orient: 'horizontal',
left: 'center',
top: 0,
formatter: (v1, v2) => {
return `${v1}-${v2}%`;
},
inRange: {
color: sharedOptions.basic.color,
},
},
series: [
{
name: 'Auslastung',
type: 'heatmap',
blurSize: 0,
data: {!! json_encode($regionPropertiesCapacities['values']) !!},
label: {
show: false
},
tooltip: {
formatter: (data) => {
let v = data.value
return `Kurzzeitmietobjekte-ID: ${v[1]}<br />Datum Scraping: ${v[0]}<br/>Auslastung: ${v[2]} %`
},
},
emphasis: {
itemStyle: {
borderColor: '#000',
borderWidth: 2
}
}
}
]
}
cHeatmap.setOption(cHeatmapOptions);
const chartPropsPerRegion = document.getElementById('chart-props-per-region');
const cPropsPerRegion = echarts.init(chartPropsPerRegion);
const cPropsPerRegionOptions = {
grid: sharedOptions.basic.grid,
xAxis: {
name: 'Region',
nameLocation: 'center',
nameGap: 24,
nameTextStyle: {
fontWeight: 'bold',
},
type: 'category',
data: {!! $propsPerRegion[0] !!}
},
yAxis: {
type: 'value',
name: 'Anzahl Kurzzeitmietobjekte',
nameLocation: 'middle',
nameGap: 38,
nameTextStyle: {
fontWeight: 'bold',
},
},
series: [
{
data: {!! $propsPerRegion[1] !!},
type: 'bar'
}
]
};
cPropsPerRegion.setOption(cPropsPerRegionOptions);
const chartExtractions = document.getElementById('extractions');
const cExtractions = echarts.init(chartExtractions);
const filters = {
regions: ["Alle", "Davos", "Engadin", "Heidiland", "St. Moritz"]
}
const cExtractionsOptions = {
tooltip: {
trigger: 'axis'
},
legend: {
data: filters.regions
},
color: sharedOptions.basic.color,
grid: sharedOptions.basic.grid,
xAxis: {
name: 'Zeitpunkt Scraping',
nameLocation: 'center',
nameGap: 24,
nameTextStyle: {
fontWeight: 'bold',
},
type: 'category',
boundaryGap: false,
data: extractionDates
},
yAxis: {
name: 'Anzahl Kurzzeitmietobjekte',
nameLocation: 'center',
nameGap: 38,
nameTextStyle: {
fontWeight: 'bold',
},
type: 'value'
},
series: [
{
name: 'Alle',
type: 'line',
stack: 'Total',
data: {!! json_encode($growth['total_all']) !!}
},
{
name: 'Heidiland',
type: 'line',
stack: 'Heidiland',
data: {!! json_encode($growth['total_heidiland']) !!}
},
{
name: 'Davos',
type: 'line',
stack: 'Davos',
data: {!! json_encode($growth['total_davos']) !!}
},
{
name: 'Engadin',
type: 'line',
stack: 'Engadin',
data: {!! json_encode($growth['total_engadin']) !!}
},
{
name: 'St. Moritz',
type: 'line',
stack: 'St. Moritz',
data: {!! json_encode($growth['total_stmoritz']) !!}
},
]
};
cExtractions.setOption(cExtractionsOptions);
const map = L.map('leaflet').setView([46.862962, 9.535296], 9);
L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png', {
maxZoom: 19,
attribution: '&copy; <a href="http://www.openstreetmap.org/copyright">OpenStreetMap</a>'
}).addTo(map);
const properties = {!! json_encode($geo) !!}
properties.forEach( prop => {
let coords = prop.coordinates.split(',');
L.marker(coords).addTo(map).bindPopup('<a href="/prop/'+prop.id+'">'+prop.coordinates+'</a>');
})
</script>
@endsection

View File

@ -0,0 +1,354 @@
@extends('base')
@section('body-class', 'property')
@section('header')
<span>Property {{ $base['property_platform_id'] }}</span><button popovertarget="prop-details"></button>
<div popover id="prop-details">
<dl>
<dt>Region</dt>
<dd>{{ $base['region_name'] }}</dd>
<dt>Zum ersten Mal gefunden</dt>
<dd>{{ $base['first_found'] }}</dd>
<dt>Zum letzten Mal gefunden</dt>
<dd>{{ $base['last_found'] }}</dd>
</dl>
<h2>Kurzzeitmietobjekte in der Nähe</h2>
<ul>
@foreach($neighbours as $n)
<li><a href="/prop/{{ $n['id'] }}">{{ $n['lat'] }}, {{$n['lon']}}</a></li>
@endforeach
</ul>
</div>
@endsection
@section('main')
<article style="grid-area: timeline;">
<div id="timeline"></div>
</article>
<article class="header" style="grid-area: chart1;">
<header>
<h2 id="belegung-title">
Belegung am {{ json_decode($extractiondates)[0] }}
</h2>
</header>
<div id="chart-calendar"></div>
</article>
<article class="header" style="grid-area: chart3;">
<header>
<h2>
Auslastung nach Monat am 2024-04-15T07:06:22
</h2>
</header>
<div id="chart-capacity-monthly">
</div>
</article>
<article class="header" style="grid-area: chart2;">
<header>
<h2>
Entwicklung der Verfügbarkeit
</h2>
<button popovertarget="chart-capacity-popover"></button>
<div id="chart-capacity-popover" popover>
<h2>Erklärung zum Diagramm</h2>
<p>Das Liniendiagramm zeigt, wie sich die gesamte Verfügbarkeit des Kurzzeitmietobjekts entwickelt hat.</p>
</div>
</header>
<div id="chart-capacity"></div>
</article>
<article class="header" style="grid-area: chart4;">
<header>
<h2>
Auslastung nach Wochentag
</h2>
</header>
<div id="chart-capacity-daily">
</article>
<script type="module">
const chartTimeline = document.getElementById('timeline');
const cTimeline = echarts.init(chartTimeline);
const cTimelineOptions = {
grid: {
show: false,
},
timeline: {
data: {!! $extractiondates !!},
playInterval: 2000,
axisType: 'time',
left: 8,
right: 8,
bottom: 0,
label: {
show: false
}
}
};
cTimeline.setOption(cTimelineOptions);
const chartCapacityMonthly = document.getElementById('chart-capacity-monthly');
const cCapacityMonthly = echarts.init(chartCapacityMonthly);
const cCapacityMonthlyOptions = {
timeline: {
show: false,
data: {!! $extractiondates !!},
axisType: 'time',
},
grid: {
top: 0,
bottom: 25,
left: 70,
right: 10
},
xAxis: {
type: 'value',
max: 100
},
yAxis: {
type: 'category',
},
options: [
@foreach ($capacitiesMonthly as $cM)
{
yAxis: {
data: {!! json_encode($cM['months']) !!}
},
series: [{
type: 'bar',
data: {!! json_encode($cM['capacities']) !!}
}]
},
@endforeach
]
};
cCapacityMonthly.setOption(cCapacityMonthlyOptions);
const chartCapacityDaily = document.getElementById('chart-capacity-daily');
const cCapacityDaily = echarts.init(chartCapacityDaily);
const cCapacityDailyOptions = {
timeline: {
show: false,
data: {!! $extractiondates !!},
axisType: 'time',
},
grid: {
top: 0,
bottom: 25,
left: 70,
right: 10
},
xAxis: {
type: 'value',
max: 100
},
yAxis: {
type: 'category',
},
options: [
@foreach ($capacitiesDaily as $cD)
{
yAxis: {
data: {!! json_encode($cD['weekdays']) !!}
},
series: [{
type: 'bar',
data: {!! json_encode($cD['capacities']) !!}
}]
},
@endforeach
]
};
cCapacityDaily.setOption(cCapacityDailyOptions);
const chartCapacity = document.getElementById('chart-capacity');
const cCapacity = echarts.init(chartCapacity);
const cCapacityOptions = {
tooltip: {
trigger: 'axis',
formatter: 'Datum Scraping: {b}<br />Verfügbarkeit: {c} %'
},
grid: {
top: 20,
left: 25,
right: 10,
bottom: 20,
containLabel: true
},
xAxis: {
type: 'category',
boundaryGap: false,
data: {!! json_encode($capacities['dates']) !!},
name: 'Zeitpunkt Scraping',
nameLocation: 'center',
nameGap: 24,
nameTextStyle: {
fontWeight: 'bold',
}
},
yAxis: {
type: 'value',
min: 0,
max: 100,
name: 'Auslastung in Prozent',
nameLocation: 'center',
nameGap: 38,
nameTextStyle: {
fontWeight: 'bold',
}
},
series: [{
name: 'Auslastung',
type: 'line',
symbolSize: 7,
data: {!! json_encode($capacities['capacities']) !!}
},
{
name: 'Auslastung Region',
type: 'line',
symbolSize: 7,
data: {!! json_encode($capacities['capacities']) !!}
}]
};
cCapacity.setOption(cCapacityOptions);
const chartCalendar = document.getElementById('chart-calendar');
const cCalendar = echarts.init(chartCalendar);
const h2Belegung = document.getElementById('belegung-title');
const cCalendarOptions = {
timeline: {
show: false,
data: {!! $extractiondates !!},
axisType: 'time',
},
visualMap: {
categories: [0,1,2],
inRange: {
color: ['#d95f02', '#7570b3', '#1b9e77']
},
formatter: (cat) => {
switch (cat) {
case 0:
return 'Ausgebucht';
case 1:
return 'Verfügbar (kein Anreisetag)';
case 2:
return 'Verfügbar';
}
},
type: 'piecewise',
orient: 'horizontal',
left: 'center',
top: 0
},
calendar:[
{
orient: 'horizontal',
range: '2024',
top: '15%',
right: 10,
bottom: '65%',
left: 50,
},
{
orient: 'horizontal',
range: '2025',
top: '47%',
right: 10,
bottom: '33%',
left: 50,
},
{
orient: 'horizontal',
range: '2026',
top: '79%',
right: 10,
bottom: '1%',
left: 50,
}
],
options: [
@foreach ($calendar as $c)
{
series: [{
type: 'heatmap',
coordinateSystem: 'calendar',
calendarIndex: 0,
data: {!! json_encode($c) !!}
},
{
type: 'heatmap',
coordinateSystem: 'calendar',
calendarIndex: 1,
data: {!! json_encode($c) !!}
},
{
type: 'heatmap',
coordinateSystem: 'calendar',
calendarIndex: 2,
data: {!! json_encode($c) !!}
}]
},
@endforeach
]
};
cCalendar.setOption(cCalendarOptions);
cTimeline.on('timelinechanged', (e) => {
h2Belegung.innerText = "Belegung am "+cCalendarOptions.timeline.data[e.currentIndex];
// Set markpoint on linechart
let x = cCapacityOptions.xAxis.data[e.currentIndex];
let y = cCapacityOptions.series[0].data[e.currentIndex];
cCapacityMonthly.dispatchAction({
type: 'timelineChange',
currentIndex: e.currentIndex
});
cCapacityDaily.dispatchAction({
type: 'timelineChange',
currentIndex: e.currentIndex
});
cCalendar.dispatchAction({
type: 'timelineChange',
currentIndex: e.currentIndex
});
cCapacity.setOption({
series: {
markPoint: {
data: [{
coord: [x, y]
}]
}
}
});
})
cCapacity.on('click', 'series', (e) => {
// Switch to correct calendar in the timeline
cTimeline.dispatchAction({
type: 'timelineChange',
currentIndex: e.dataIndex
});
});
</script>
@endsection

View File

@ -0,0 +1,8 @@
<?php
use Illuminate\Foundation\Inspiring;
use Illuminate\Support\Facades\Artisan;
Artisan::command('inspire', function () {
$this->comment(Inspiring::quote());
})->purpose('Display an inspiring quote')->hourly();

72
dashboard/routes/web.php Normal file
View File

@ -0,0 +1,72 @@
<?php
use Illuminate\Support\Facades\Route;
use App\Api;
Route::get('/', function () {
$regionPropertyCapacities = Api::regionPropertyCapacities(-1);
$propertiesGrowth = Api::propertiesGrowth();
$propsPerRegion = Api::propertiesPerRegion();
$propsPerRegionName = [];
$propsPerRegionCounts = [];
foreach ($propsPerRegion as $el) {
$propsPerRegionName[] = $el['name'];
$propsPerRegionCounts[] = $el['count_properties'];
}
$propertiesGeo = Api::propertiesGeo();
return view('overview', ["regionPropertiesCapacities" => $regionPropertyCapacities, "geo" => $propertiesGeo, "growth" => $propertiesGrowth, "propsPerRegion" => [json_encode($propsPerRegionName), json_encode($propsPerRegionCounts)]]);
});
Route::get('/prop/{id}', function (int $id) {
$propertyBase = Api::propertyBase($id);
$extractions = Api::propertyExtractions($id);
$propertyCapacities = Api::propertyCapacities($id);
$propertyNeighbours = Api::propertyNeighbours($id);
//$regionCapacities = Api::regionCapacities(-1);
$regionCapacities = [];
$propertyCapacitiesMonthly = [];
$propertyCapacitiesDaily = [];
foreach ($extractions as $extraction) {
$propertyCapacitiesMonthly[] = Api::propertyCapacitiesMonthly($id, $extraction['created_at']);
$propertyCapacitiesDaily[] = Api::propertyCapacitiesDaily($id, $extraction['created_at']);
}
$data = [];
$dates = [];
foreach ($extractions as $ext) {
$series = [];
$dates[] = $ext['created_at'];
$extCalendar = json_decode($ext['calendar'], 1);
foreach ($extCalendar as $date => $status) {
$series[] = [$date, $status];
}
$data[] = $series;
}
return view('property', ['base' => $propertyBase[0], "extractiondates" => json_encode($dates), "calendar" => $data, 'capacities' => $propertyCapacities, 'capacitiesMonthly' => $propertyCapacitiesMonthly, 'capacitiesDaily' => $propertyCapacitiesDaily, 'regionCapacities' => $regionCapacities, 'neighbours' => $propertyNeighbours]);
});
Route::get('/region/{id}', function (int $id) {
$regionCapacities = Api::regionCapacities($id);
dump($regionCapacities);
return view('region', ['capacities' => $regionCapacities]);
});

4
dashboard/storage/app/.gitignore vendored Normal file
View File

@ -0,0 +1,4 @@
*
!private/
!public/
!.gitignore

View File

@ -0,0 +1,2 @@
*
!.gitignore

View File

@ -0,0 +1,2 @@
*
!.gitignore

View File

@ -0,0 +1,9 @@
compiled.php
config.php
down
events.scanned.php
maintenance.php
routes.php
routes.scanned.php
schedule-*
services.json

View File

@ -0,0 +1,3 @@
*
!data/
!.gitignore

View File

@ -0,0 +1,2 @@
*
!.gitignore

View File

@ -0,0 +1,2 @@
*
!.gitignore

View File

@ -0,0 +1,2 @@
*
!.gitignore

View File

@ -0,0 +1,2 @@
*
!.gitignore

2
dashboard/storage/logs/.gitignore vendored Normal file
View File

@ -0,0 +1,2 @@
*
!.gitignore

View File

@ -0,0 +1,20 @@
import defaultTheme from 'tailwindcss/defaultTheme';
/** @type {import('tailwindcss').Config} */
export default {
content: [
'./vendor/laravel/framework/src/Illuminate/Pagination/resources/views/*.blade.php',
'./storage/framework/views/*.php',
'./resources/**/*.blade.php',
'./resources/**/*.js',
'./resources/**/*.vue',
],
theme: {
extend: {
fontFamily: {
sans: ['Figtree', ...defaultTheme.fontFamily.sans],
},
},
},
plugins: [],
};

View File

@ -0,0 +1,19 @@
<?php
namespace Tests\Feature;
// use Illuminate\Foundation\Testing\RefreshDatabase;
use Tests\TestCase;
class ExampleTest extends TestCase
{
/**
* A basic test example.
*/
public function test_the_application_returns_a_successful_response(): void
{
$response = $this->get('/');
$response->assertStatus(200);
}
}

View File

@ -0,0 +1,10 @@
<?php
namespace Tests;
use Illuminate\Foundation\Testing\TestCase as BaseTestCase;
abstract class TestCase extends BaseTestCase
{
//
}

View File

@ -0,0 +1,16 @@
<?php
namespace Tests\Unit;
use PHPUnit\Framework\TestCase;
class ExampleTest extends TestCase
{
/**
* A basic test example.
*/
public function test_that_true_is_true(): void
{
$this->assertTrue(true);
}
}

11
dashboard/vite.config.js Normal file
View File

@ -0,0 +1,11 @@
import { defineConfig } from 'vite';
import laravel from 'laravel-vite-plugin';
export default defineConfig({
plugins: [
laravel({
input: ['resources/css/app.css', 'resources/js/app.js'],
refresh: true,
}),
],
});

View File

@ -0,0 +1,31 @@
<mxfile host="app.diagrams.net" agent="Mozilla/5.0 (X11; Linux x86_64; rv:133.0) Gecko/20100101 Firefox/133.0" version="25.0.3">
<diagram name="Seite-1" id="5abS_fUiar5VuBZXZINZ">
<mxGraphModel dx="1195" dy="1534" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="827" pageHeight="1169" math="0" shadow="0">
<root>
<mxCell id="0" />
<mxCell id="1" parent="0" />
<object placeholders="1" c4Name="REST-API" c4Type="Python (FastAPI)" c4Description="REST Schnittstelle" label="&lt;font style=&quot;font-size: 16px&quot;&gt;&lt;b&gt;%c4Name%&lt;/b&gt;&lt;/font&gt;&lt;div&gt;[%c4Type%]&lt;/div&gt;&lt;br&gt;&lt;div&gt;&lt;font style=&quot;font-size: 11px&quot;&gt;&lt;font color=&quot;#cccccc&quot;&gt;%c4Description%&lt;/font&gt;&lt;/div&gt;" id="DRD_0cKAZXVdgcTgqyKr-1">
<mxCell style="rounded=1;whiteSpace=wrap;html=1;labelBackgroundColor=none;fillColor=#1061B0;fontColor=#ffffff;align=center;arcSize=10;strokeColor=#0D5091;metaEdit=1;resizable=0;points=[[0.25,0,0],[0.5,0,0],[0.75,0,0],[1,0.25,0],[1,0.5,0],[1,0.75,0],[0.75,1,0],[0.5,1,0],[0.25,1,0],[0,0.75,0],[0,0.5,0],[0,0.25,0]];" vertex="1" parent="1">
<mxGeometry x="360" y="40" width="240" height="120" as="geometry" />
</mxCell>
</object>
<mxCell id="DRD_0cKAZXVdgcTgqyKr-5" value="" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;" edge="1" parent="1" source="DRD_0cKAZXVdgcTgqyKr-2" target="DRD_0cKAZXVdgcTgqyKr-1">
<mxGeometry relative="1" as="geometry" />
</mxCell>
<object placeholders="1" c4Name="Data" c4Type="Python (Polars)" c4Description="Eigenes Python Package. Enthält Programmcode für das ETL" label="&lt;font style=&quot;font-size: 16px&quot;&gt;&lt;b&gt;%c4Name%&lt;/b&gt;&lt;/font&gt;&lt;div&gt;[%c4Type%]&lt;/div&gt;&lt;br&gt;&lt;div&gt;&lt;font style=&quot;font-size: 11px&quot;&gt;&lt;font color=&quot;#cccccc&quot;&gt;%c4Description%&lt;/font&gt;&lt;/div&gt;" id="DRD_0cKAZXVdgcTgqyKr-2">
<mxCell style="rounded=1;whiteSpace=wrap;html=1;labelBackgroundColor=none;fillColor=#1061B0;fontColor=#ffffff;align=center;arcSize=10;strokeColor=#0D5091;metaEdit=1;resizable=0;points=[[0.25,0,0],[0.5,0,0],[0.75,0,0],[1,0.25,0],[1,0.5,0],[1,0.75,0],[0.75,1,0],[0.5,1,0],[0.25,1,0],[0,0.75,0],[0,0.5,0],[0,0.25,0]];" vertex="1" parent="1">
<mxGeometry x="40" y="40" width="240" height="120" as="geometry" />
</mxCell>
</object>
<object placeholders="1" c4Name="Datenbank" c4Type="Container" c4Technology="DuckDB" c4Description="Datenbank, welches die aggregierten Daten enthält." label="&lt;font style=&quot;font-size: 16px&quot;&gt;&lt;b&gt;%c4Name%&lt;/b&gt;&lt;/font&gt;&lt;div&gt;[%c4Type%:&amp;nbsp;%c4Technology%]&lt;/div&gt;&lt;br&gt;&lt;div&gt;&lt;font style=&quot;font-size: 11px&quot;&gt;&lt;font color=&quot;#E6E6E6&quot;&gt;%c4Description%&lt;/font&gt;&lt;/div&gt;" id="DRD_0cKAZXVdgcTgqyKr-3">
<mxCell style="shape=cylinder3;size=15;whiteSpace=wrap;html=1;boundedLbl=1;rounded=0;labelBackgroundColor=none;fillColor=#23A2D9;fontSize=12;fontColor=#ffffff;align=center;strokeColor=#0E7DAD;metaEdit=1;points=[[0.5,0,0],[1,0.25,0],[1,0.5,0],[1,0.75,0],[0.5,1,0],[0,0.75,0],[0,0.5,0],[0,0.25,0]];resizable=0;" vertex="1" parent="1">
<mxGeometry x="40" y="240" width="240" height="120" as="geometry" />
</mxCell>
</object>
<mxCell id="DRD_0cKAZXVdgcTgqyKr-4" style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;exitX=0.5;exitY=1;exitDx=0;exitDy=0;exitPerimeter=0;entryX=0.5;entryY=0;entryDx=0;entryDy=0;entryPerimeter=0;" edge="1" parent="1" source="DRD_0cKAZXVdgcTgqyKr-2" target="DRD_0cKAZXVdgcTgqyKr-3">
<mxGeometry relative="1" as="geometry" />
</mxCell>
</root>
</mxGraphModel>
</diagram>
</mxfile>

4
etl/README.md Normal file
View File

@ -0,0 +1,4 @@
# How to run
```bash
fastapi dev api/main.py --port 8080
```
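
Once the dev server is running, the endpoints defined in `etl/src/api/main.py` can be queried directly. A minimal sketch using only the Python standard library (the base URL matches the `--port 8080` flag above; the property id `42` is a placeholder):

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:8080"  # dev server started with the command above

# Per-region property counts (endpoint /region/properties)
with urllib.request.urlopen(f"{BASE_URL}/region/properties") as resp:
    print(json.load(resp))

# Capacity history of a single property (id 42 is a placeholder)
with urllib.request.urlopen(f"{BASE_URL}/property/42/capacities") as resp:
    capacities = json.load(resp)
    print(capacities["dates"], capacities["capacities"])
```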

View File

@ -2200,7 +2200,7 @@ packages:
name: consultancy-2
version: 0.1.0
path: .
sha256: 390e1115c19758a67a2876388f5a8fe69abc3609e68910e50ccb86a558ee67ee
sha256: c09f63486f0dd4151008de68ef73d00f72663dc3cc47894ff750d517f898a23b
requires_python: '>=3.11'
editable: true
- kind: conda

View File

@ -1,7 +1,6 @@
[project]
authors = [{name = "Giò Diani", email = "mail@gionathandiani.name"}]
dependencies = []
description = "Add a short description here"
authors = [{name = "Giò Diani", email = "mail@gionathandiani.name"}, {name = "Mauro Stoffel", email = "mauro.stoffel@stud.fhgr.ch"}, {name = "Colin Bolli", email = "colin.bolli@stud.fhgr.ch"}, {name = "Charles Winkler", email = "charles.winkler@stud.fhgr.ch"}]
description = "Datenauferbeitung"
name = "consultancy_2"
requires-python = ">= 3.11"
version = "0.1.0"

82
etl/src/api/main.py Normal file
View File

@ -0,0 +1,82 @@
import data
import polars as pl
from data import etl_property_capacities as etl_pc
from data import etl_property_capacities_monthly as etl_pcm
from data import etl_property_capacities_weekdays as etl_pcw
from data import etl_property_neighbours as etl_pn
from data import etl_region_capacities as etl_rc
from data import etl_region_properties_capacities as etl_rpc
from data import etl_region_capacities_comparison as etl_rcc
from fastapi import FastAPI, Response
d = data.load()
app = FastAPI()
@app.get("/")
def read_root():
return {"Hi there!"}
@app.get("/items/{item_id}")
def read_item(item_id: int):
ext = d.extractions_for(item_id).pl()
out = ext.with_columns(pl.col("calendar").str.extract_all(r"([0-9]{4}-[0-9]{2}-[0-9]{2})|[0-2]").alias("calendar_data"))
out = out.drop(['calendar', 'property_id'])
return Response(content=out.write_json(), media_type="application/json")
@app.get("/region/properties")
def properties_region():
return d.properties_per_region().pl().to_dicts()
@app.get("/properties/growth")
def properties_growth():
options = {"dates" : d.properties_growth().pl()['date'].to_list(), "total_all" : d.properties_growth().pl()['total_all'].to_list(), "total_heidiland" : d.properties_growth().pl()['total_heidiland'].to_list(), "total_engadin" : d.properties_growth().pl()['total_engadin'].to_list(), "total_davos" : d.properties_growth().pl()['total_davos'].to_list(), "total_stmoritz" : d.properties_growth().pl()['total_stmoritz'].to_list()}
return options
@app.get("/properties/geo")
def properties_geo():
return d.properties_geo().pl().to_dicts()
@app.get("/property/{id}/neighbours")
def property_neighbours(id: int):
capacities = etl_pn.property_neighbours(id)
return capacities
@app.get("/property/{id}/extractions")
def property_extractions(id: int):
return d.extractions_for(property_id = id).pl().to_dicts()
@app.get("/property/{id}/capacities")
def property_capacities_data(id: int):
capacities = etl_pc.property_capacities(id)
return capacities
@app.get("/property/{id}/capacities/monthly/{scrapeDate}")
def property_capacities_monthly_data(id: int, scrapeDate: str):
capacities = etl_pcm.property_capacities_monthly(id, scrapeDate)
return capacities
@app.get("/property/{id}/capacities/weekdays/{scrapeDate}")
def property_capacities_weekdays_data(id: int, scrapeDate: str):
capacities = etl_pcw.property_capacities_weekdays(id, scrapeDate)
return capacities
@app.get("/property/{id}/base")
def property_base_data(id: int):
return d.property_base_data(id).pl().to_dicts()
@app.get("/region/{id}/properties/capacities")
def region_property_capacities_data(id: int):
capacities = etl_rpc.region_properties_capacities(id)
return capacities
@app.get("/region/{id}/capacities")
def region_capacities_data(id: int):
capacities = etl_rc.region_capacities(id)
return capacities
@app.get("/region/capacities/comparison/{id_1}/{id_2}")
def region_capacities_comparison_data(id_1: int, id_2: int):
capacities = etl_rcc.region_capacities_comparison(id_1, id_2)
return capacities

View File

View File

@ -28,8 +28,6 @@ class Database:
if(spatial_installed and not spatial_installed[0]):
self.connection.sql("INSTALL spatial")
def db_overview(self):
return self.connection.sql("DESCRIBE;").show()
@ -46,13 +44,93 @@ class Database:
def properties_growth(self):
return self.connection.sql("""
WITH PropertiesALL AS (
SELECT
strftime(created_at, '%Y-%m-%d') AS date,
COUNT(*) as properties_count,
SUM(properties_count) OVER (ORDER BY date) AS total
FROM
consultancy_d.properties p
GROUP BY
date
ORDER BY
date
),
PropertiesR1 AS (
SELECT
strftime(created_at, '%Y-%m-%d') AS date,
COUNT(*) as properties_count,
SUM(properties_count) OVER (ORDER BY date) AS total
FROM
consultancy_d.properties p
WHERE
p.seed_id = 1
GROUP BY
date
ORDER BY
date
),
PropertiesR2 AS (
SELECT
strftime(created_at, '%Y-%m-%d') AS date,
COUNT(*) as properties_count,
SUM(properties_count) OVER (ORDER BY date) AS total
FROM
consultancy_d.properties p
WHERE
p.seed_id = 2
GROUP BY
date
ORDER BY
date
),
PropertiesR3 AS (
SELECT
strftime(created_at, '%Y-%m-%d') AS date,
COUNT(*) as properties_count,
SUM(properties_count) OVER (ORDER BY date) AS total
FROM
consultancy_d.properties p
WHERE
p.seed_id = 3
GROUP BY
date
ORDER BY
date
),
PropertiesR4 AS (
SELECT
strftime(created_at, '%Y-%m-%d') AS date,
COUNT(*) as properties_count,
SUM(properties_count) OVER (ORDER BY date) AS total
FROM
consultancy_d.properties p
WHERE
p.seed_id = 4
GROUP BY
date
ORDER BY
date
)
SELECT
strftime(created_at, '%Y-%m-%d') AS date,
COUNT(*) as properties_count
p.date,
p.total AS total_all,
pR1.total as total_heidiland,
pR2.total AS total_davos,
pR3.total AS total_engadin,
pR4.total AS total_stmoritz
FROM
consultancy_d.properties
GROUP BY
date;
PropertiesAll p
LEFT JOIN
PropertiesR1 pR1 ON p.date = pR1.date
LEFT JOIN
PropertiesR2 pR2 ON p.date = pR2.date
LEFT JOIN
PropertiesR3 pR3 ON p.date = pR3.date
LEFT JOIN
PropertiesR4 pR4 ON p.date = pR4.date
ORDER BY
p.date
""")
def properties_per_region(self):
@ -69,6 +147,20 @@ class Database:
GROUP BY
properties.seed_id,
regions.name
ORDER BY
count_properties ASC
""")
def propIds_with_region(self):
return self.connection.sql("""
SELECT
properties.id, seed_id, regions.name
FROM
consultancy_d.properties
LEFT JOIN
consultancy_d.seeds ON seeds.id = properties.seed_id
LEFT JOIN
consultancy_d.regions ON regions.id = seeds.region_id
""")
def properties_unreachable(self):
@ -196,7 +288,7 @@ class Database:
""")
def extractions(self):
return self.connection.sql(f"""
return self.connection.sql("""
SELECT
JSON_EXTRACT(body, '$.content.days') as calendar,
property_id,
@ -209,19 +301,54 @@ class Database:
property_id
""")
def extractions_with_region(self):
return self.connection.sql("""
SELECT
JSON_EXTRACT(body, '$.content.days') as calendar,
extractions.property_id,
extractions.created_at,
properties.seed_id,
regions.name
FROM
consultancy_d.extractions
LEFT JOIN
consultancy_d.properties ON properties.id = extractions.property_id
LEFT JOIN
consultancy_d.seeds ON seeds.id = properties.seed_id
LEFT JOIN
consultancy_d.regions ON regions.id = seeds.region_id
""")
def extractions_for(self, property_id):
return self.connection.sql(f"""
SELECT
JSON_EXTRACT(body, '$.content.days') as calendar,
property_id,
created_at
FROM
consultancy_d.extractions
WHERE
type == 'calendar' AND
property_id = {property_id}
property_id = {property_id} AND
calendar NOT NULL
ORDER BY
property_id
created_at
""")
def extractions_propId_scrapeDate(self, property_id: int, scrape_date: str):
return self.connection.sql(f"""
SELECT
JSON_EXTRACT(body, '$.content.days') as calendar,
created_at
FROM
consultancy_d.extractions
WHERE
type == 'calendar' AND
property_id = {property_id} AND
calendar NOT NULL AND
created_at >= '{scrape_date}'
ORDER BY
created_at
LIMIT 1
""")
# Anzahl der extrahierten properties pro Extraktionsvorgang
@ -267,3 +394,83 @@ class Database:
ORDER BY property_id
""")
def property_base_data(self, id):
return self.connection.sql(f"""
SELECT
p.property_platform_id,
p.created_at as first_found,
p.last_found,
r.name as region_name
FROM
consultancy_d.properties p
INNER JOIN consultancy_d.seeds s ON s.id = p.seed_id
INNER JOIN consultancy_d.regions r ON s.region_id = r.id
WHERE
p.id = {id}
""")
def properties_geo(self):
return self.connection.sql("""
SELECT
p.id,
p.check_data as coordinates
FROM
consultancy_d.properties p
""")
def properties_geo_seeds(self):
return self.connection.sql("""
SELECT
p.id,
p.seed_id,
p.check_data as coordinates
FROM
consultancy_d.properties p
""")
def capacity_of_region(self, region_id):
return self.connection.sql(f"""
SELECT
JSON_EXTRACT(body, '$.content.days') as calendarBody,
strftime(extractions.created_at, '%Y-%m-%d') AS ScrapeDate,
extractions.property_id,
FROM
consultancy_d.extractions
LEFT JOIN
consultancy_d.properties ON properties.id = extractions.property_id
WHERE
type == 'calendar' AND
properties.seed_id = {region_id}
""")
def capacity_global(self):
return self.connection.sql(f"""
SELECT
JSON_EXTRACT(body, '$.content.days') as calendarBody,
strftime(extractions.created_at, '%Y-%m-%d') AS ScrapeDate,
extractions.property_id,
FROM
consultancy_d.extractions
LEFT JOIN
consultancy_d.properties ON properties.id = extractions.property_id
WHERE
type == 'calendar'
""")
def capacity_comparison_of_region(self, region_id_1, region_id_2):
return self.connection.sql(f"""
SELECT
JSON_EXTRACT(body, '$.content.days') as calendarBody,
strftime(extractions.created_at, '%Y-%m-%d') AS ScrapeDate,
extractions.property_id,
properties.seed_id
FROM
consultancy_d.extractions
LEFT JOIN
consultancy_d.properties ON properties.id = extractions.property_id
WHERE
type == 'calendar' AND
(properties.seed_id = {region_id_1} OR
properties.seed_id = {region_id_2})
""")

View File

@ -0,0 +1,39 @@
from io import StringIO
import polars as pl
import data
d = data.load()
def property_capacities(id: int):
extractions = d.extractions_for(id).pl()
df_dates = pl.DataFrame()
for row in extractions.rows(named=True):
df_calendar = pl.read_json(StringIO(row['calendar']))
#df_calendar.insert_column(0, pl.Series("created_at", [row['created_at']]))
df_dates = pl.concat([df_calendar, df_dates], how="diagonal")
# order = sorted(df_dates.columns)
# df_dates = df_dates.select(order)
sum_hor = df_dates.sum_horizontal()
#print(sum_hor)
# Get the available dates per extraction
count_days = []
for dates in df_dates.rows():
# Remove all None values
liste = [x for x in dates if x is not None]
count_days.append(len(liste))
counts = pl.DataFrame({"count_days" : count_days, "sum" : sum_hor})
result = {"capacities": [], "dates": extractions['created_at'].cast(pl.Datetime).to_list() }
for row in counts.rows(named=True):
max_capacity = row['count_days'] * 2
max_capacity_perc = 100 / max_capacity
result['capacities'].append(round(max_capacity_perc * row['sum'], 2))
result['capacities'].reverse()
return result

View File

@ -0,0 +1,27 @@
from io import StringIO
import polars as pl
import data
d = data.load()
def property_capacities_monthly(id: int, scrapeDate: str):
extractions = d.extractions_propId_scrapeDate(id, scrapeDate).pl()
df_calendar = pl.DataFrame()
for row in extractions.rows(named=True):
scrapeDate = row['created_at']
df_calendar = pl.read_json(StringIO(row['calendar']))
columnTitles = df_calendar.columns
df_calendar = df_calendar.transpose()
df_calendar = df_calendar.with_columns(pl.Series(name="dates", values=columnTitles))
df_calendar = df_calendar.with_columns((pl.col("dates").str.to_date()))
df_calendar = df_calendar.with_columns((pl.col("dates").dt.strftime("%b") + " " + (pl.col("dates").dt.strftime("%Y"))).alias('date_short'))
df_calendar = df_calendar.with_columns((pl.col("dates").dt.strftime("%Y") + " " + (pl.col("dates").dt.strftime("%m"))).alias('dates'))
df_calendar = df_calendar.group_by(['dates', 'date_short']).agg(pl.col("column_0").sum())
df_calendar = df_calendar.sort('dates')
df_calendar = df_calendar.drop('dates')
result = {"scraping-date": scrapeDate, "months": df_calendar['date_short'].to_list(), 'capacities': df_calendar['column_0'].to_list()}
return result

View File

@ -0,0 +1,33 @@
from io import StringIO
import polars as pl
import data
d = data.load()
def property_capacities_weekdays(id: int, scrapeDate: str):
extractions = d.extractions_propId_scrapeDate(id, scrapeDate).pl()
weekdays = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
df_calendar = pl.DataFrame()
numWeeks = 0
for row in extractions.rows(named=True):
scrapeDate = row['created_at']
df_calendar = pl.read_json(StringIO(row['calendar']))
columnTitles = df_calendar.columns
df_calendar = df_calendar.transpose()
df_calendar = df_calendar.with_columns(pl.Series(name="dates", values=columnTitles))
df_calendar = df_calendar.with_columns((pl.col("dates").str.to_date()))
numWeeks = round((df_calendar.get_column("dates").max() - df_calendar.get_column("dates").min()).days / 7, 0)
df_calendar = df_calendar.with_columns(pl.col("dates").dt.weekday().alias("weekday_num"))
df_calendar = df_calendar.with_columns(pl.col("dates").dt.strftime("%A").alias("weekday"))
df_calendar = df_calendar.drop("dates")
df_calendar = df_calendar.group_by(["weekday", "weekday_num"]).agg(pl.col("column_0").sum())
df_calendar = df_calendar.with_columns((pl.col("column_0") / numWeeks * 100).alias("column_0"))
df_calendar = df_calendar.sort('weekday_num')
df_calendar = df_calendar.drop('weekday_num')
result = {"scraping-date": scrapeDate, "weekdays": df_calendar['weekday'].to_list(), 'capacities': df_calendar['column_0'].to_list()}
return result

View File

@ -0,0 +1,66 @@
import polars as pl
from math import radians, cos, sin, asin, sqrt, degrees, atan2
import data
d = data.load()
def calcHaversinDistance(latMain, lonMain, lat, lon):
R = 6371
# convert decimal degrees to radians
latMain, lonMain, lat, lon = map(radians, [latMain, lonMain, lat, lon])
# haversine formula
dlon = lonMain - lon
dlat = latMain - lat
a = sin(dlat / 2) ** 2 + cos(lat) * cos(latMain) * sin(dlon / 2) ** 2
c = 2 * asin(sqrt(a)) # 2 * atan2(sqrt(a), sqrt(1-a))
d = R * c
return d
def property_neighbours(id: int):
extractions = d.properties_geo_seeds().pl()
# Get lat, long and region from main property
latMain, lonMain = extractions.filter(pl.col('id') == str(id))['coordinates'][0].split(',')
latMain, lonMain = map(float, [latMain, lonMain])
region = extractions.filter(pl.col('id') == str(id))['seed_id'][0]
# Prefilter the dataframe to only the correct region
extractions = extractions.filter(pl.col('seed_id') == str(region))
extractions = extractions.drop('seed_id')
# Remove main property from DF
extractions = extractions.filter(pl.col('id') != str(id))
# Split coordinate into lat and lon
#extractions = extractions.with_columns((pl.col('coordinates').str.split(','))[0].alias("coordinates")).unnest("fields")
extractions = extractions.with_columns(pl.col("coordinates").str.split_exact(",", 1).struct.rename_fields(["lat", "lon"]).alias("lat/lon")).unnest("lat/lon")
extractions = extractions.drop('coordinates')
extractions = extractions.with_columns(pl.col("lat").cast(pl.Float32))
extractions = extractions.with_columns(pl.col("lon").cast(pl.Float32))
# Calculate distances
distances = []
for row in extractions.rows(named=True):
lat = row['lat']
lon = row['lon']
dist = calcHaversinDistance(latMain, lonMain, lat, lon)
distances.append(dist)
# Add distance to DF
extractions = extractions.with_columns(pl.Series(name="distances", values=distances))
# Sort for distance and give only first 10
extractions = extractions.sort("distances").head(10)
extractions = extractions.drop('distances')
#result = {"ids": extractions['id'].to_list(), "lat": extractions['lat'].to_list(), "lon": extractions['lon'].to_list()}
result = extractions.to_dicts()
return result

View File

@ -0,0 +1,53 @@
from io import StringIO
from datetime import date
import polars as pl
import data
d = data.load()
def region_capacities(id: int):
# Get Data
if id == -1:
extractions = d.capacity_global().pl()
else:
extractions = d.capacity_of_region(id).pl()
# turn PropertyIDs to ints for sorting
extractions = extractions.cast({"property_id": int})
extractions.drop('property_id')
df_dates = pl.DataFrame()
# Get Data from JSON
gridData = []
dayCounts = []
for row in extractions.rows(named=True):
# Return 0 for sum if calendar is null
if row['calendarBody']:
calDF = pl.read_json(StringIO(row['calendarBody']))
sum_hor = calDF.sum_horizontal()[0]
else:
sum_hor = 0
gridData.append([row['ScrapeDate'], sum_hor, calDF.width if row['calendarBody'] else 0])  # avoid referencing calDF when the calendar is empty
# Create Aggregates of values
df = pl.DataFrame(gridData)
df_count = df.group_by("column_0").agg(pl.col("column_1").count())
df_sum = df.group_by("column_0").agg(pl.col("column_1").sum())
df_numDays = df.group_by("column_0").agg(pl.col("column_2").max())
# Join and rename DF's
df = df_sum.join(df_count, on= 'column_0').join(df_numDays, on= 'column_0')
df = df.rename({"column_0": "ScrapeDate", "column_1": "Sum", "column_1_right": "num_properties", "column_2": "max_value", })
# Calculate normed capacities for each scrapeDate
df = df.with_columns((pl.col("Sum") / pl.col("num_properties") / (pl.col("max_value")*2) * 100).alias("capacity"))
# Sort the date column
df = df.cast({"ScrapeDate": date})
df = df.sort('ScrapeDate')
result = {"capacities": df['capacity'].to_list(), "dates": df['ScrapeDate'].to_list()}
return result
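
As a quick sanity check of the normalisation above (illustrative numbers, not real scrape data): for one scrape date with 3 properties, a summed calendar value of 270 and calendars spanning 90 days, the formula yields 50 % capacity.

```python
# Illustrative check of the capacity normalisation used in region_capacities (made-up numbers)
summed = 270          # "Sum": summed calendar values of all properties for one scrape date
num_properties = 3    # "num_properties": properties scraped on that date
max_value = 90        # "max_value": longest calendar in days; each day contributes at most 2

capacity = summed / num_properties / (max_value * 2) * 100
print(capacity)  # 50.0
```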

View File

@ -0,0 +1,68 @@
import data
import polars as pl
from io import StringIO
import numpy as np
d = data.load()
def region_capacities_comparison(id_1: int, id_2: int):
fulldf = d.capacity_comparison_of_region(id_1, id_2).pl()
# turn PropertyIDs and seedIDs to ints for sorting and filtering
fulldf = fulldf.cast({"property_id": int})
fulldf = fulldf.cast({"seed_id": int})
df_region1 = fulldf.filter(pl.col("seed_id") == id_1)
df_region2 = fulldf.filter(pl.col("seed_id") == id_2)
df_list = [df_region1, df_region2]
outDictList = []
for df in df_list:
# Get uniques for dates and propIDs and sort them
listOfDates = df.get_column("ScrapeDate").unique().sort()
listOfPropertyIDs = df.get_column("property_id").unique().sort()
# Create DFs from lists to merge later
datesDF = pl.DataFrame(listOfDates).with_row_index("date_index")
propIdDF = pl.DataFrame(listOfPropertyIDs).with_row_index("prop_index")
# Merge Dataframe to generate indices
df = df.join(datesDF, on='ScrapeDate')
df = df.join(propIdDF, on='property_id')
# Drop now useless columns ScrapeDate and property_id
df = df[['ScrapeDate', 'calendarBody', 'date_index', 'prop_index']]
# Calculate grid values
gridData = []
for row in df.rows(named=True):
# Return 0 for sum if calendar is null
if row['calendarBody']:
calDF = pl.read_json(StringIO(row['calendarBody']))
sum_hor = calDF.sum_horizontal()[0]
else:
sum_hor = 0
# With Index
# gridData.append([row['prop_index'], row['date_index'], sum_hor])
# With ScrapeDate
gridData.append([row['ScrapeDate'], row['date_index'], sum_hor])
gridData = np.array(gridData)
# get all values to calculate Max
allValues = gridData[:, 2].astype(int)
maxValue = np.max(allValues)
gridData[:, 2] = (allValues*100)/maxValue
# Return back to list
gridData = gridData.tolist()
# Cast listOfDates to datetime
listOfDates = listOfDates.cast(pl.Date).to_list()
listOfPropertyIDs = listOfPropertyIDs.to_list()
# Create JSON
tempDict = {'scrapeDates': listOfDates, 'property_ids': listOfPropertyIDs, 'values': gridData}
outDictList.append(tempDict)
outDict = {'region1': outDictList[0], 'region2': outDictList[1],}
return outDict
if __name__ == '__main__':
    out = region_capacities_comparison(1, 2)
    print(out)

View File

@ -0,0 +1,57 @@
from io import StringIO
import polars as pl
import data
d = data.load()
def region_properties_capacities(id: int):
# Get Data
if id == -1:
df = d.capacity_global().pl()
else:
df = d.capacity_of_region(id).pl()
# turn PropertyIDs to ints for sorting
df = df.cast({"property_id": int})
# Get uniques for dates and propIDs and sort them
listOfDates = df.get_column("ScrapeDate").unique().sort()
listOfPropertyIDs = df.get_column("property_id").unique().sort()
# Create DFs from lists to merge later
datesDF = pl.DataFrame(listOfDates).with_row_index("date_index")
propIdDF = pl.DataFrame(listOfPropertyIDs).with_row_index("prop_index")
# Merge Dataframe to generate indices
df = df.join(datesDF, on='ScrapeDate')
df = df.join(propIdDF, on='property_id')
# Calculate grid values
gridData = pl.DataFrame(schema=[("scrape_date", pl.String), ("property_id", pl.String), ("sum_hor", pl.Int64)])
for row in df.rows(named=True):
# Return 0 for sum if calendar is null
if row['calendarBody']:
calDF = pl.read_json(StringIO(row['calendarBody']))
sum_hor = calDF.sum_horizontal()[0]
else:
sum_hor = 0
gridData = gridData.vstack(pl.DataFrame({"scrape_date" : row['ScrapeDate'], "property_id": str(row['property_id']), "sum_hor": sum_hor}))
# get the overall maximum sum
maxValue = gridData['sum_hor'].max()
values = []
for row in gridData.rows(named=True):
capacity = (row['sum_hor']*100)/maxValue
values.append((row['scrape_date'], row['property_id'], capacity))
# Cast listOfDates to datetime
listOfDates = listOfDates.cast(pl.Date).to_list()
listOfPropertyIDs = listOfPropertyIDs.cast(pl.String).to_list()
# Create JSON
outDict = {'scrapeDates': listOfDates, 'property_ids': listOfPropertyIDs, 'values': values}
return outDict

View File

@ -0,0 +1,121 @@
from etl.src import data
import json
import polars as pl
from datetime import datetime
import matplotlib.pyplot as plt
import numpy as np
'''
# Get Data from DB
inst = data.load()
df = inst.extractions_with_region().pl()
print(df)
counter = 0
data = []
for row in df.iter_rows():
property_id = row[1]
created_at = row[2].date()
dict = {'property_id': property_id, 'created_at': created_at, 'name': row[3]}
jsonStr = row[0]
if jsonStr:
calendarDict = json.loads(jsonStr)
for key in calendarDict:
dict[key] = calendarDict[key]
data.append(dict)
dfNew = pl.from_dicts(data)
dfNew.write_csv('results/data_quality.csv')
print(dfNew)
'''
dfNew = pl.read_csv('results/data_quality.csv')
dfNew = dfNew.with_columns(pl.col("created_at").map_elements(lambda x: datetime.strptime(x, "%Y-%m-%d").date()))
# Create Row Means
dfTemp = dfNew
# Temporary Remove leading columns but save for later
prop = dfTemp.get_column('property_id')
dfTemp = dfTemp.drop('property_id')
crea = dfTemp.get_column('created_at')
dfTemp = dfTemp.drop('created_at')
name = dfTemp.get_column('name')
dfTemp = dfTemp.drop('name')
dfTemp = dfTemp.with_columns(sum=pl.sum_horizontal(dfTemp.columns))
sumCol = dfTemp.get_column('sum')
# Create new DF with only property_id, created_at ,Location name and sum
df = pl.DataFrame([prop, crea, name, sumCol])
df = df.sort('created_at')
# Create Full Copy
# 0 = Alles
# 1 = Heidiland
# 2 = Davos
# 3 = Engadin
# 4 = St. Moritz
filterList = ['Alle Regionen', 'Heidiland', 'Davos', 'Engadin', 'St. Moritz']
filter = 4
if filter != 0:
df = df.filter(pl.col("name") == filter)
# Remove Location name
df = df.drop('name')
# Get unique property_ids
propsIDs = df.unique(subset=["property_id"])
propsIDs = propsIDs.get_column("property_id").to_list()
propsIDs.sort()
# create Matrix
matrix = []
for id in propsIDs:
dict = {}
temp = df.filter(pl.col("property_id") == id)
for row in temp.iter_rows():
dict[row[1].strftime('%Y-%m-%d')] = row[2]
matrix.append(dict)
matrix = pl.DataFrame(matrix)
dates = matrix.columns
matrix = matrix.to_numpy()
# normalized
matrix = matrix/1111
yRange = range(len(dates))
xRange = range(len(propsIDs))
matrix = matrix.T
plt.imshow(matrix)
plt.yticks(yRange[::5], dates[::5])
plt.xticks(xRange[::10], propsIDs[::10])
plt.title(filterList[filter])
plt.xlabel("Property ID")
plt.ylabel("Scrape Date")
plt.colorbar()
plt.tight_layout()
# Create DiffMatrix
diffMatrix = np.zeros((len(matrix)-1, len(matrix[0])))
for y in range(len(matrix[0])):
for x in range(len(matrix)-1):
diffMatrix[x][y] = abs(matrix[x][y] - matrix[x+1][y])
plt.figure()
plt.imshow(diffMatrix, cmap="Reds")
plt.yticks(yRange[::5], dates[::5])
plt.xticks(xRange[::10], propsIDs[::10])
plt.title(filterList[filter])
plt.xlabel("Property ID")
plt.ylabel("Scrape Date")
plt.colorbar()
plt.tight_layout()
plt.show()

Some files were not shown because too many files have changed in this diff