Merge branch 'v0.4.0' into v0.4_invite_overhaul

This commit is contained in:
Michael Quigley 2023-05-22 15:07:24 -04:00
commit 41c30e4158
No known key found for this signature in database
GPG Key ID: 9B60314A9DD20A62
222 changed files with 15099 additions and 1693 deletions

View File

@ -12,6 +12,7 @@ on:
jobs:
ubuntu-build:
name: Build Linux AMD64 CLI
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v3
@ -38,4 +39,76 @@ jobs:
run: go install -ldflags "-X github.com/openziti/zrok/build.Version=${{ github.ref }} -X github.com/openziti/zrok/build.Hash=${{ github.sha }}" ./...
- name: test
run: go test -v ./...
run: go test -v ./...
- name: solve GOBIN
id: solve_go_bin
run: |
echo DEBUG: go_path="$(go env GOPATH)"
echo go_bin="$(go env GOPATH)/bin" >> $GITHUB_OUTPUT
- name: upload build artifact
uses: actions/upload-artifact@v3
with:
name: linux-amd64
path: ${{ steps.solve_go_bin.outputs.go_bin }}/zrok
if-no-files-found: error
# build a release candidate container image for branches named "main" or like "v*"
rc-container-build:
needs: ubuntu-build
if: github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/heads/v')
name: Build Release Candidate Container Image
runs-on: ubuntu-latest
steps:
- name: Set a container image tag from the branch name
id: slug
run: |
echo branch_tag=$(sed 's/[^a-z0-9_-]/__/gi' <<< "${GITHUB_REF#refs/heads/}") >> $GITHUB_OUTPUT
- name: Checkout Workspace
uses: actions/checkout@v3
- name: Download Branch Build Artifact
uses: actions/download-artifact@v3
with:
name: linux-amd64
path: ./dist/amd64/linux/
- name: Set Up QEMU
uses: docker/setup-qemu-action@v2
with:
platforms: amd64,arm64
- name: Set Up Docker BuildKit
id: buildx
uses: docker/setup-buildx-action@v2
- name: Login to Docker Hub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_HUB_API_USER }}
password: ${{ secrets.DOCKER_HUB_API_TOKEN }}
- name: Set Up Container Image Tags for zrok CLI Container
env:
ZROK_CONTAINER_IMAGE_REPO: ${{ vars.ZROK_CONTAINER_IMAGE_REPO || 'openziti/zrok' }}
ZROK_CONTAINER_IMAGE_TAG: ${{ steps.slug.outputs.branch_tag }}
id: tagprep_cli
run: |
DOCKER_TAGS=""
DOCKER_TAGS="${ZROK_CONTAINER_IMAGE_REPO}:${ZROK_CONTAINER_IMAGE_TAG}"
echo "DEBUG: DOCKER_TAGS=${DOCKER_TAGS}"
echo DOCKER_TAGS="${DOCKER_TAGS}" >> $GITHUB_OUTPUT
- name: Build & Push Linux AMD64 CLI Container Image to Hub
uses: docker/build-push-action@v3
with:
builder: ${{ steps.buildx.outputs.name }}
context: ${{ github.workspace }}/
file: ${{ github.workspace }}/docker/images/zrok/Dockerfile
platforms: linux/amd64
tags: ${{ steps.tagprep_cli.outputs.DOCKER_TAGS }}
build-args: |
DOCKER_BUILD_DIR=./docker/images/zrok
ARTIFACTS_DIR=./dist
push: true
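The "Set a container image tag from the branch name" step above derives a Docker-safe tag from the branch ref. A small sketch of what that substitution produces (assuming GNU sed, as on the `ubuntu` runners; the `slug` helper is illustrative, not part of the workflow):

```shell
# Illustrative wrapper around the workflow's sed expression: any character
# outside [a-z0-9_-] (matched case-insensitively) is replaced with "__".
slug() { printf '%s\n' "$1" | sed 's/[^a-z0-9_-]/__/gi'; }

slug main                   # -> main
slug v0.4_invite_overhaul   # -> v0__4_invite_overhaul
slug feature/foo            # -> feature__foo
```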

View File

@ -1,9 +1,25 @@
# v0.4.0
FEATURE: New metrics infrastructure based on OpenZiti usage events (https://github.com/openziti/zrok/issues/128). See the [v0.4 Metrics Guide](docs/guides/v0.4_metrics.md) for more information.
FEATURE: New `tcpTunnel` backend mode allowing for private sharing of local TCP sockets with other `zrok` users (https://github.com/openziti/zrok/issues/170)
FEATURE: New `udpTunnel` backend mode allowing for private sharing of local UDP sockets with other `zrok` users (https://github.com/openziti/zrok/issues/306)
FEATURE: New metrics infrastructure based on OpenZiti usage events (https://github.com/openziti/zrok/issues/128). See the [v0.4 Metrics Guide](docs/guides/metrics-and-limits/configuring-metrics.md) for more information.
FEATURE: New limits implementation based on the new metrics infrastructure (https://github.com/openziti/zrok/issues/235). See the [v0.4 Limits Guide](docs/guides/metrics-and-limits/configuring-limits.md) for more information.
CHANGE: The controller configuration version bumps from `v: 2` to `v: 3` to support all of the new `v0.4` functionality. See the [example ctrl.yml](etc/ctrl.yml) for details on the new configuration.
CHANGE: The underlying database store now utilizes a `deleted` flag on all tables to implement "soft deletes". This was necessary for the new metrics infrastructure, where we need to account for metrics data that arrived after the lifetime of a share or environment; and also we're going to need this for limits, where we need to see historical information about activity in the past (https://github.com/openziti/zrok/issues/262)
# v0.3.7
FIX: Improved TUI word-wrapping (https://github.com/openziti/zrok/issues/180)
# v0.3.6
CHANGE: Additional change to support branch builds (for CI purposes) and additional containerization efforts around k8s.
# v0.3.5
CHANGE: `zrok config set apiEndpoint` now validates that the new API endpoint correctly starts with `http://` or `https://` (https://github.com/openziti/zrok/issues/258)
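The `zrok config set apiEndpoint` change above amounts to a scheme-prefix check. A minimal sketch of that validation (hypothetical helper; the actual zrok implementation may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// validateApiEndpoint rejects endpoints that lack an explicit http:// or
// https:// scheme, mirroring the changelog entry above.
// (Illustrative only; not zrok's actual function.)
func validateApiEndpoint(endpoint string) error {
	if !strings.HasPrefix(endpoint, "http://") && !strings.HasPrefix(endpoint, "https://") {
		return fmt.Errorf("invalid apiEndpoint '%v'; expected 'http://' or 'https://' prefix", endpoint)
	}
	return nil
}

func main() {
	fmt.Println(validateApiEndpoint("https://api.zrok.example.com")) // <nil>
	fmt.Println(validateApiEndpoint("api.zrok.example.com"))         // error
}
```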
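The "soft delete" change described in the entries above flags rows instead of removing them, so late-arriving metrics can still resolve the share or environment they refer to. A sketch of the pattern (table and column names are illustrative, not zrok's actual schema):

```go
package main

import "fmt"

// softDeleteStmt builds an UPDATE that flags a row rather than deleting it,
// keeping it available for historical metrics and limits queries.
func softDeleteStmt(table string) string {
	return fmt.Sprintf("UPDATE %v SET deleted = true WHERE id = $1", table)
}

// Live-row lookups then filter on the flag instead of relying on row absence.
func findLiveStmt(table string) string {
	return fmt.Sprintf("SELECT * FROM %v WHERE token = $1 AND NOT deleted", table)
}

func main() {
	fmt.Println(softDeleteStmt("shares"))
	fmt.Println(findLiveStmt("environments"))
}
```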
@ -44,7 +60,7 @@ CHANGE: Incorporate initial docker image build (https://github.com/openziti/zrok
CHANGE: Improve target URL parsing for `zrok share` when using `--backend-mode` proxy (https://github.com/openziti/zrok/issues/211)
New and improved URL handling for proxy backends:
9090 -> http://127.0.0.1:9090
localhost:9090 -> http://127.0.0.1:9090
https://localhost:9090 -> https://localhost:9090
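The three mappings above can be sketched as a small normalization helper (hypothetical; the real parsing in `zrok share` may handle more cases):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// normalizeProxyTarget sketches the proxy target handling listed above: a bare
// port gets a loopback host and http scheme, a host:port pair gets an http
// scheme (localhost rewritten to 127.0.0.1), and a full URL passes through
// unchanged. Illustrative only, not zrok's actual parser.
func normalizeProxyTarget(target string) string {
	if _, err := strconv.Atoi(target); err == nil {
		return "http://127.0.0.1:" + target
	}
	if strings.Contains(target, "://") {
		return target
	}
	return "http://" + strings.Replace(target, "localhost", "127.0.0.1", 1)
}

func main() {
	for _, t := range []string{"9090", "localhost:9090", "https://localhost:9090"} {
		fmt.Printf("%v -> %v\n", t, normalizeProxyTarget(t))
	}
}
```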

View File

@ -36,7 +36,7 @@ See the [Concepts and Getting Started Guide](docs/getting-started.md) for a full
The single `zrok` binary contains everything you need to operate `zrok` environments and also host your own service instances. Just add an OpenZiti network and you're up and running.
See the [Self-Hosting Guide](docs/guides/v0.3_self_hosting_guide.md) for details on getting your own `zrok` service instance running. This builds on top of the [OpenZiti Quick Start](https://docs.openziti.io/docs/learn/quickstarts/network/) to have a running `zrok` service instance in minutes.
See the [Self-Hosting Guide](docs/guides/self_hosting_guide.md) for details on getting your own `zrok` service instance running. This builds on top of the [OpenZiti Quick Start](https://docs.openziti.io/docs/learn/quickstarts/network/) to have a running `zrok` service instance in minutes.
## Building

View File

@ -5,7 +5,9 @@ import (
"github.com/go-openapi/runtime"
httptransport "github.com/go-openapi/runtime/client"
"github.com/openziti/zrok/endpoints"
"github.com/openziti/zrok/endpoints/privateFrontend"
"github.com/openziti/zrok/endpoints/proxy"
"github.com/openziti/zrok/endpoints/tcpTunnel"
"github.com/openziti/zrok/endpoints/udpTunnel"
"github.com/openziti/zrok/rest_client_zrok"
"github.com/openziti/zrok/rest_client_zrok/share"
"github.com/openziti/zrok/rest_model_zrok"
@ -17,8 +19,11 @@ import (
"os"
"os/signal"
"syscall"
"time"
)
var accessPrivateCmd *accessPrivateCommand
func init() {
accessCmd.AddCommand(newAccessPrivateCommand().cmd)
}
@ -45,14 +50,6 @@ func newAccessPrivateCommand() *accessPrivateCommand {
func (cmd *accessPrivateCommand) run(_ *cobra.Command, args []string) {
shrToken := args[0]
endpointUrl, err := url.Parse("http://" + cmd.bindAddress)
if err != nil {
if !panicInstead {
tui.Error("invalid endpoint address", err)
}
panic(err)
}
zrd, err := zrokdir.Load()
if err != nil {
tui.Error("unable to load zrokdir", err)
@ -85,10 +82,89 @@ func (cmd *accessPrivateCommand) run(_ *cobra.Command, args []string) {
}
logrus.Infof("allocated frontend '%v'", accessResp.Payload.FrontendToken)
cfg := privateFrontend.DefaultConfig("backend")
cfg.ShrToken = shrToken
cfg.Address = cmd.bindAddress
cfg.RequestsChan = make(chan *endpoints.Request, 1024)
protocol := "http://"
switch accessResp.Payload.BackendMode {
case "tcpTunnel":
protocol = "tcp://"
case "udpTunnel":
protocol = "udp://"
}
endpointUrl, err := url.Parse(protocol + cmd.bindAddress)
if err != nil {
if !panicInstead {
tui.Error("invalid endpoint address", err)
}
panic(err)
}
requests := make(chan *endpoints.Request, 1024)
switch accessResp.Payload.BackendMode {
case "tcpTunnel":
fe, err := tcpTunnel.NewFrontend(&tcpTunnel.FrontendConfig{
BindAddress: cmd.bindAddress,
IdentityName: "backend",
ShrToken: args[0],
RequestsChan: requests,
})
if err != nil {
if !panicInstead {
tui.Error("unable to create private frontend", err)
}
panic(err)
}
go func() {
if err := fe.Run(); err != nil {
if !panicInstead {
tui.Error("error starting frontend", err)
}
panic(err)
}
}()
case "udpTunnel":
fe, err := udpTunnel.NewFrontend(&udpTunnel.FrontendConfig{
BindAddress: cmd.bindAddress,
IdentityName: "backend",
ShrToken: args[0],
RequestsChan: requests,
IdleTime: time.Minute,
})
if err != nil {
if !panicInstead {
tui.Error("unable to create private frontend", err)
}
panic(err)
}
go func() {
if err := fe.Run(); err != nil {
if !panicInstead {
tui.Error("error starting frontend", err)
}
panic(err)
}
}()
default:
cfg := proxy.DefaultFrontendConfig("backend")
cfg.ShrToken = shrToken
cfg.Address = cmd.bindAddress
cfg.RequestsChan = requests
fe, err := proxy.NewFrontend(cfg)
if err != nil {
if !panicInstead {
tui.Error("unable to create private frontend", err)
}
panic(err)
}
go func() {
if err := fe.Run(); err != nil {
if !panicInstead {
tui.Error("unable to run frontend", err)
}
}
}()
}
c := make(chan os.Signal)
signal.Notify(c, os.Interrupt, syscall.SIGTERM)
@ -98,27 +174,11 @@ func (cmd *accessPrivateCommand) run(_ *cobra.Command, args []string) {
os.Exit(0)
}()
frontend, err := privateFrontend.NewHTTP(cfg)
if err != nil {
if !panicInstead {
tui.Error("unable to create private frontend", err)
}
panic(err)
}
go func() {
if err := frontend.Run(); err != nil {
if !panicInstead {
tui.Error("unable to run frontend", err)
}
}
}()
if cmd.headless {
logrus.Infof("access the zrok share at the following endpoint: %v", endpointUrl.String())
for {
select {
case req := <-cfg.RequestsChan:
case req := <-requests:
logrus.Infof("%v -> %v %v", req.RemoteAddr, req.Method, req.Path)
}
}
@ -132,7 +192,7 @@ func (cmd *accessPrivateCommand) run(_ *cobra.Command, args []string) {
go func() {
for {
select {
case req := <-cfg.RequestsChan:
case req := <-requests:
if req != nil {
prg.Send(req)
}
@ -144,17 +204,16 @@ func (cmd *accessPrivateCommand) run(_ *cobra.Command, args []string) {
tui.Error("An error occurred", err)
}
close(cfg.RequestsChan)
close(requests)
cmd.destroy(accessResp.Payload.FrontendToken, zrd.Env.ZId, shrToken, zrok, auth)
}
}
func (cmd *accessPrivateCommand) destroy(frotendName, envZId, shrToken string, zrok *rest_client_zrok.Zrok, auth runtime.ClientAuthInfoWriter) {
func (cmd *accessPrivateCommand) destroy(frontendName, envZId, shrToken string, zrok *rest_client_zrok.Zrok, auth runtime.ClientAuthInfoWriter) {
logrus.Debugf("shutting down '%v'", shrToken)
req := share.NewUnaccessParams()
req.Body = &rest_model_zrok.UnaccessRequest{
FrontendToken: frotendName,
FrontendToken: frontendName,
ShrToken: shrToken,
EnvZID: envZId,
}

View File

@ -3,7 +3,7 @@ package main
import (
"fmt"
"github.com/michaelquigley/cf"
"github.com/openziti/zrok/endpoints/publicFrontend"
"github.com/openziti/zrok/endpoints/publicProxy"
"github.com/openziti/zrok/tui"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
@ -33,7 +33,7 @@ func newAccessPublicCommand() *accessPublicCommand {
}
func (cmd *accessPublicCommand) run(_ *cobra.Command, args []string) {
cfg := publicFrontend.DefaultConfig()
cfg := publicProxy.DefaultConfig()
if len(args) == 1 {
if err := cfg.Load(args[0]); err != nil {
if !panicInstead {
@ -43,7 +43,7 @@ func (cmd *accessPublicCommand) run(_ *cobra.Command, args []string) {
}
}
logrus.Infof(cf.Dump(cfg, cf.DefaultOptions()))
frontend, err := publicFrontend.NewHTTP(cfg)
frontend, err := publicProxy.NewHTTP(cfg)
if err != nil {
if !panicInstead {
tui.Error("unable to create http frontend", err)

View File

@ -3,7 +3,7 @@ package main
import (
"fmt"
"github.com/michaelquigley/cf"
"github.com/openziti/zrok/endpoints/publicFrontend"
"github.com/openziti/zrok/endpoints/publicProxy"
"github.com/openziti/zrok/tui"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
@ -29,7 +29,7 @@ func newAccessPublicValidateCommand() *accessPublicValidateCommand {
}
func (cmd *accessPublicValidateCommand) run(_ *cobra.Command, args []string) {
cfg := publicFrontend.DefaultConfig()
cfg := publicProxy.DefaultConfig()
if err := cfg.Load(args[0]); err != nil {
tui.Error(fmt.Sprintf("unable to load configuration '%v'", args[0]), err)
}

View File

@ -3,6 +3,7 @@ package main
import (
"github.com/michaelquigley/cf"
"github.com/openziti/zrok/controller"
"github.com/openziti/zrok/controller/config"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
)
@ -26,13 +27,13 @@ func newAdminBootstrap() *adminBootstrap {
command := &adminBootstrap{cmd: cmd}
cmd.Run = command.run
cmd.Flags().BoolVar(&command.skipCtrl, "skip-ctrl", false, "Skip controller (ctrl) identity bootstrapping")
cmd.Flags().BoolVar(&command.skipFrontend, "skip-frontend", false, "Slip frontend identity bootstrapping")
cmd.Flags().BoolVar(&command.skipFrontend, "skip-frontend", false, "Skip frontend identity bootstrapping")
return command
}
func (cmd *adminBootstrap) run(_ *cobra.Command, args []string) {
configPath := args[0]
inCfg, err := controller.LoadConfig(configPath)
inCfg, err := config.LoadConfig(configPath)
if err != nil {
panic(err)
}

View File

@ -3,6 +3,7 @@ package main
import (
"github.com/michaelquigley/cf"
"github.com/openziti/zrok/controller"
"github.com/openziti/zrok/controller/config"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
)
@ -27,7 +28,7 @@ func newAdminGcCommand() *adminGcCommand {
}
func (gc *adminGcCommand) run(_ *cobra.Command, args []string) {
cfg, err := controller.LoadConfig(args[0])
cfg, err := config.LoadConfig(args[0])
if err != nil {
panic(err)
}

View File

@ -3,14 +3,21 @@ package main
import (
"github.com/michaelquigley/cf"
"github.com/openziti/zrok/controller"
"github.com/openziti/zrok/controller/config"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
)
var controllerCmd *controllerCommand
var metricsCmd = &cobra.Command{
Use: "metrics",
Short: "Metrics related commands",
}
func init() {
controllerCmd = newControllerCommand()
controllerCmd.cmd.AddCommand(metricsCmd)
rootCmd.AddCommand(controllerCmd.cmd)
}
@ -31,7 +38,7 @@ func newControllerCommand() *controllerCommand {
}
func (cmd *controllerCommand) run(_ *cobra.Command, args []string) {
cfg, err := controller.LoadConfig(args[0])
cfg, err := config.LoadConfig(args[0])
if err != nil {
panic(err)
}

View File

@ -0,0 +1,61 @@
package main
import (
"github.com/michaelquigley/cf"
"github.com/openziti/zrok/controller/config"
"github.com/openziti/zrok/controller/env"
"github.com/openziti/zrok/controller/metrics"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"os"
"os/signal"
"syscall"
"time"
)
func init() {
metricsCmd.AddCommand(newBridgeCommand().cmd)
}
type bridgeCommand struct {
cmd *cobra.Command
}
func newBridgeCommand() *bridgeCommand {
cmd := &cobra.Command{
Use: "bridge <configPath>",
Short: "Start a zrok metrics bridge",
Args: cobra.ExactArgs(1),
}
command := &bridgeCommand{cmd}
cmd.Run = command.run
return command
}
func (cmd *bridgeCommand) run(_ *cobra.Command, args []string) {
cfg, err := config.LoadConfig(args[0])
if err != nil {
panic(err)
}
logrus.Infof(cf.Dump(cfg, env.GetCfOptions()))
bridge, err := metrics.NewBridge(cfg.Bridge)
if err != nil {
panic(err)
}
if _, err = bridge.Start(); err != nil {
panic(err)
}
c := make(chan os.Signal)
signal.Notify(c, os.Interrupt, syscall.SIGTERM)
go func() {
<-c
bridge.Stop()
os.Exit(0)
}()
for {
time.Sleep(24 * 60 * time.Minute)
}
}

View File

@ -2,7 +2,7 @@ package main
import (
"github.com/michaelquigley/cf"
"github.com/openziti/zrok/controller"
"github.com/openziti/zrok/controller/config"
"github.com/openziti/zrok/tui"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
@ -28,7 +28,7 @@ func newControllerValidateCommand() *controllerValidateCommand {
}
func (cmd *controllerValidateCommand) run(_ *cobra.Command, args []string) {
cfg, err := controller.LoadConfig(args[0])
cfg, err := config.LoadConfig(args[0])
if err != nil {
tui.Error("controller config validation failed", err)
}

View File

@ -2,6 +2,9 @@ package main
import (
"github.com/michaelquigley/pfxlog"
"github.com/openziti/transport/v2"
"github.com/openziti/transport/v2/tcp"
"github.com/openziti/transport/v2/udp"
"github.com/openziti/zrok/tui"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
@ -24,6 +27,8 @@ func init() {
rootCmd.AddCommand(configCmd)
rootCmd.AddCommand(shareCmd)
rootCmd.AddCommand(testCmd)
transport.AddAddressParser(tcp.AddressParser{})
transport.AddAddressParser(udp.AddressParser{})
}
var rootCmd = &cobra.Command{

View File

@ -1,57 +0,0 @@
package main
import (
"github.com/michaelquigley/cf"
"github.com/openziti/zrok/controller/metrics"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"os"
"os/signal"
"syscall"
"time"
)
func init() {
rootCmd.AddCommand(newMetricsCommand().cmd)
}
type metricsCommand struct {
cmd *cobra.Command
}
func newMetricsCommand() *metricsCommand {
cmd := &cobra.Command{
Use: "metrics <configPath>",
Short: "Start a zrok metrics agent",
Args: cobra.ExactArgs(1),
}
command := &metricsCommand{cmd}
cmd.Run = command.run
return command
}
func (cmd *metricsCommand) run(_ *cobra.Command, args []string) {
cfg, err := metrics.LoadConfig(args[0])
if err != nil {
panic(err)
}
logrus.Infof(cf.Dump(cfg, metrics.GetCfOptions()))
ma, err := metrics.Run(cfg)
if err != nil {
panic(err)
}
c := make(chan os.Signal)
signal.Notify(c, os.Interrupt, syscall.SIGTERM)
go func() {
<-c
ma.Stop()
ma.Join()
os.Exit(0)
}()
for {
time.Sleep(30 * time.Minute)
}
}

View File

@ -6,8 +6,9 @@ import (
"github.com/go-openapi/runtime"
httptransport "github.com/go-openapi/runtime/client"
"github.com/openziti/zrok/endpoints"
"github.com/openziti/zrok/endpoints/proxyBackend"
"github.com/openziti/zrok/endpoints/webBackend"
"github.com/openziti/zrok/endpoints/proxy"
"github.com/openziti/zrok/endpoints/tcpTunnel"
"github.com/openziti/zrok/endpoints/udpTunnel"
"github.com/openziti/zrok/model"
"github.com/openziti/zrok/rest_client_zrok"
"github.com/openziti/zrok/rest_client_zrok/share"
@ -43,7 +44,7 @@ func newSharePrivateCommand() *sharePrivateCommand {
}
command := &sharePrivateCommand{cmd: cmd}
cmd.Flags().StringArrayVar(&command.basicAuth, "basic-auth", []string{}, "Basic authentication users (<username:password>,...)")
cmd.Flags().StringVar(&command.backendMode, "backend-mode", "proxy", "The backend mode {proxy, web}")
cmd.Flags().StringVar(&command.backendMode, "backend-mode", "proxy", "The backend mode {proxy, web, tcpTunnel, udpTunnel}")
cmd.Flags().BoolVar(&command.headless, "headless", false, "Disable TUI and run headless")
cmd.Flags().BoolVar(&command.insecure, "insecure", false, "Enable insecure TLS certificate validation for <target>")
cmd.Run = command.run
@ -67,8 +68,14 @@ func (cmd *sharePrivateCommand) run(_ *cobra.Command, args []string) {
case "web":
target = args[0]
case "tcpTunnel":
target = args[0]
case "udpTunnel":
target = args[0]
default:
tui.Error(fmt.Sprintf("invalid backend mode '%v'; expected {proxy, web}", cmd.backendMode), nil)
tui.Error(fmt.Sprintf("invalid backend mode '%v'; expected {proxy, web, tcpTunnel, udpTunnel}", cmd.backendMode), nil)
}
zrd, err := zrokdir.Load()
@ -139,7 +146,7 @@ func (cmd *sharePrivateCommand) run(_ *cobra.Command, args []string) {
requestsChan := make(chan *endpoints.Request, 1024)
switch cmd.backendMode {
case "proxy":
cfg := &proxyBackend.Config{
cfg := &proxy.BackendConfig{
IdentityPath: zif,
EndpointAddress: target,
ShrToken: resp.Payload.ShrToken,
@ -155,7 +162,7 @@ func (cmd *sharePrivateCommand) run(_ *cobra.Command, args []string) {
}
case "web":
cfg := &webBackend.Config{
cfg := &proxy.WebBackendConfig{
IdentityPath: zif,
WebRoot: target,
ShrToken: resp.Payload.ShrToken,
@ -169,6 +176,46 @@ func (cmd *sharePrivateCommand) run(_ *cobra.Command, args []string) {
panic(err)
}
case "tcpTunnel":
cfg := &tcpTunnel.BackendConfig{
IdentityPath: zif,
EndpointAddress: target,
ShrToken: resp.Payload.ShrToken,
RequestsChan: requestsChan,
}
be, err := tcpTunnel.NewBackend(cfg)
if err != nil {
if !panicInstead {
tui.Error("unable to create tcpTunnel backend", err)
}
panic(err)
}
go func() {
if err := be.Run(); err != nil {
logrus.Errorf("error running tcpTunnel backend: %v", err)
}
}()
case "udpTunnel":
cfg := &udpTunnel.BackendConfig{
IdentityPath: zif,
EndpointAddress: target,
ShrToken: resp.Payload.ShrToken,
RequestsChan: requestsChan,
}
be, err := udpTunnel.NewBackend(cfg)
if err != nil {
if !panicInstead {
tui.Error("unable to create udpTunnel backend", err)
}
panic(err)
}
go func() {
if err := be.Run(); err != nil {
logrus.Errorf("error running udpTunnel backend: %v", err)
}
}()
default:
tui.Error("invalid backend mode", nil)
}
@ -207,8 +254,8 @@ func (cmd *sharePrivateCommand) run(_ *cobra.Command, args []string) {
}
}
func (cmd *sharePrivateCommand) proxyBackendMode(cfg *proxyBackend.Config) (endpoints.RequestHandler, error) {
be, err := proxyBackend.NewBackend(cfg)
func (cmd *sharePrivateCommand) proxyBackendMode(cfg *proxy.BackendConfig) (endpoints.RequestHandler, error) {
be, err := proxy.NewBackend(cfg)
if err != nil {
return nil, errors.Wrap(err, "error creating http proxy backend")
}
@ -222,8 +269,8 @@ func (cmd *sharePrivateCommand) proxyBackendMode(cfg *proxyBackend.Config) (endp
return be, nil
}
func (cmd *sharePrivateCommand) webBackendMode(cfg *webBackend.Config) (endpoints.RequestHandler, error) {
be, err := webBackend.NewBackend(cfg)
func (cmd *sharePrivateCommand) webBackendMode(cfg *proxy.WebBackendConfig) (endpoints.RequestHandler, error) {
be, err := proxy.NewWebBackend(cfg)
if err != nil {
return nil, errors.Wrap(err, "error creating http web backend")
}

View File

@ -6,8 +6,7 @@ import (
"github.com/go-openapi/runtime"
httptransport "github.com/go-openapi/runtime/client"
"github.com/openziti/zrok/endpoints"
"github.com/openziti/zrok/endpoints/proxyBackend"
"github.com/openziti/zrok/endpoints/webBackend"
"github.com/openziti/zrok/endpoints/proxy"
"github.com/openziti/zrok/model"
"github.com/openziti/zrok/rest_client_zrok"
"github.com/openziti/zrok/rest_client_zrok/share"
@ -142,7 +141,7 @@ func (cmd *sharePublicCommand) run(_ *cobra.Command, args []string) {
requestsChan := make(chan *endpoints.Request, 1024)
switch cmd.backendMode {
case "proxy":
cfg := &proxyBackend.Config{
cfg := &proxy.BackendConfig{
IdentityPath: zif,
EndpointAddress: target,
ShrToken: resp.Payload.ShrToken,
@ -158,7 +157,7 @@ func (cmd *sharePublicCommand) run(_ *cobra.Command, args []string) {
}
case "web":
cfg := &webBackend.Config{
cfg := &proxy.WebBackendConfig{
IdentityPath: zif,
WebRoot: target,
ShrToken: resp.Payload.ShrToken,
@ -209,8 +208,8 @@ func (cmd *sharePublicCommand) run(_ *cobra.Command, args []string) {
}
}
func (cmd *sharePublicCommand) proxyBackendMode(cfg *proxyBackend.Config) (endpoints.RequestHandler, error) {
be, err := proxyBackend.NewBackend(cfg)
func (cmd *sharePublicCommand) proxyBackendMode(cfg *proxy.BackendConfig) (endpoints.RequestHandler, error) {
be, err := proxy.NewBackend(cfg)
if err != nil {
return nil, errors.Wrap(err, "error creating http proxy backend")
}
@ -224,8 +223,8 @@ func (cmd *sharePublicCommand) proxyBackendMode(cfg *proxyBackend.Config) (endpo
return be, nil
}
func (cmd *sharePublicCommand) webBackendMode(cfg *webBackend.Config) (endpoints.RequestHandler, error) {
be, err := webBackend.NewBackend(cfg)
func (cmd *sharePublicCommand) webBackendMode(cfg *proxy.WebBackendConfig) (endpoints.RequestHandler, error) {
be, err := proxy.NewWebBackend(cfg)
if err != nil {
return nil, errors.Wrap(err, "error creating http web backend")
}

View File

@ -5,8 +5,7 @@ import (
tea "github.com/charmbracelet/bubbletea"
httptransport "github.com/go-openapi/runtime/client"
"github.com/openziti/zrok/endpoints"
"github.com/openziti/zrok/endpoints/proxyBackend"
"github.com/openziti/zrok/endpoints/webBackend"
"github.com/openziti/zrok/endpoints/proxy"
"github.com/openziti/zrok/rest_client_zrok/metadata"
"github.com/openziti/zrok/rest_client_zrok/share"
"github.com/openziti/zrok/rest_model_zrok"
@ -108,7 +107,7 @@ func (cmd *shareReservedCommand) run(_ *cobra.Command, args []string) {
requestsChan := make(chan *endpoints.Request, 1024)
switch resp.Payload.BackendMode {
case "proxy":
cfg := &proxyBackend.Config{
cfg := &proxy.BackendConfig{
IdentityPath: zif,
EndpointAddress: target,
ShrToken: shrToken,
@ -124,7 +123,7 @@ func (cmd *shareReservedCommand) run(_ *cobra.Command, args []string) {
}
case "web":
cfg := &webBackend.Config{
cfg := &proxy.WebBackendConfig{
IdentityPath: zif,
WebRoot: target,
ShrToken: shrToken,
@ -187,8 +186,8 @@ func (cmd *shareReservedCommand) run(_ *cobra.Command, args []string) {
}
}
func (cmd *shareReservedCommand) proxyBackendMode(cfg *proxyBackend.Config) (endpoints.RequestHandler, error) {
be, err := proxyBackend.NewBackend(cfg)
func (cmd *shareReservedCommand) proxyBackendMode(cfg *proxy.BackendConfig) (endpoints.RequestHandler, error) {
be, err := proxy.NewBackend(cfg)
if err != nil {
return nil, errors.Wrap(err, "error creating http proxy backend")
}
@ -202,8 +201,8 @@ func (cmd *shareReservedCommand) proxyBackendMode(cfg *proxyBackend.Config) (end
return be, nil
}
func (cmd *shareReservedCommand) webBackendMode(cfg *webBackend.Config) (endpoints.RequestHandler, error) {
be, err := webBackend.NewBackend(cfg)
func (cmd *shareReservedCommand) webBackendMode(cfg *proxy.WebBackendConfig) (endpoints.RequestHandler, error) {
be, err := proxy.NewWebBackend(cfg)
if err != nil {
return nil, errors.Wrap(err, "error creating http web backend")
}

View File

@ -2,16 +2,20 @@ package main
import (
"fmt"
"strings"
"time"
tea "github.com/charmbracelet/bubbletea"
"github.com/charmbracelet/lipgloss"
"github.com/muesli/reflow/wordwrap"
"github.com/openziti/zrok/endpoints"
"strings"
"time"
)
const shareTuiBacklog = 256
var wordwrapCharacters = " -"
var wordwrapBreakpoints = map[rune]bool{' ': true, '-': true}
type shareModel struct {
shrToken string
frontendDescriptions []string
@ -144,6 +148,7 @@ func (m *shareModel) renderRequests() string {
}
}
}
requestLines = wrap(requestLines, m.width-2)
maxRows := shareRequestsStyle.GetHeight()
startRow := 0
if len(requestLines) > maxRows {
@ -183,6 +188,7 @@ func (m *shareModel) renderLog() string {
}
}
}
splitLines = wrap(splitLines, m.width-2)
maxRows := shareLogStyle.GetHeight()
startRow := 0
if len(splitLines) > maxRows {
@ -211,6 +217,38 @@ func (m *shareModel) Write(p []byte) (n int, err error) {
return len(p), nil
}
func wrap(lines []string, width int) []string {
ret := make([]string, 0)
for _, line := range lines {
if width <= 0 || len(line) <= width {
ret = append(ret, line)
continue
}
for i := 0; i <= len(line); {
max := i + width
if max > len(line) {
max = len(line)
}
if line[i:max] == "" {
break
}
nextI := i + width
if max < len(line)-1 {
if !wordwrapBreakpoints[rune(line[max])] || !wordwrapBreakpoints[rune(line[max+1])] {
lastSpace := strings.LastIndexAny(line[:max], wordwrapCharacters)
if lastSpace > -1 {
max = lastSpace
nextI = lastSpace
}
}
}
ret = append(ret, strings.TrimSpace(line[i:max]))
i = nextI
}
}
return ret
}
var shareHeaderStyle = lipgloss.NewStyle().
BorderStyle(lipgloss.RoundedBorder()).
BorderForeground(lipgloss.Color("63")).

View File

@ -45,12 +45,12 @@ func (h *accessHandler) Handle(params share.AccessParams, principal *rest_model_
}
shrToken := params.Body.ShrToken
sshr, err := str.FindShareWithToken(shrToken, tx)
shr, err := str.FindShareWithToken(shrToken, tx)
if err != nil {
logrus.Errorf("error finding share")
return share.NewAccessNotFound()
}
if sshr == nil {
if shr == nil {
logrus.Errorf("unable to find share '%v' for user '%v'", shrToken, principal.Email)
return share.NewAccessNotFound()
}
@ -61,7 +61,7 @@ func (h *accessHandler) Handle(params share.AccessParams, principal *rest_model_
return share.NewAccessInternalServerError()
}
if _, err := str.CreateFrontend(envId, &store.Frontend{Token: feToken, ZId: envZId}, tx); err != nil {
if _, err := str.CreateFrontend(envId, &store.Frontend{PrivateShareId: &shr.Id, Token: feToken, ZId: envZId}, tx); err != nil {
logrus.Errorf("error creating frontend record for user '%v': %v", principal.Email, err)
return share.NewAccessInternalServerError()
}
@ -76,7 +76,7 @@ func (h *accessHandler) Handle(params share.AccessParams, principal *rest_model_
"zrokFrontendToken": feToken,
"zrokShareToken": shrToken,
}
if err := zrokEdgeSdk.CreateServicePolicyDial(envZId+"-"+sshr.ZId+"-dial", sshr.ZId, []string{envZId}, addlTags, edge); err != nil {
if err := zrokEdgeSdk.CreateServicePolicyDial(feToken+"-"+envZId+"-"+shr.ZId+"-dial", shr.ZId, []string{envZId}, addlTags, edge); err != nil {
logrus.Errorf("unable to create dial policy for user '%v': %v", principal.Email, err)
return share.NewAccessInternalServerError()
}
@ -86,5 +86,8 @@ func (h *accessHandler) Handle(params share.AccessParams, principal *rest_model_
return share.NewAccessInternalServerError()
}
return share.NewAccessCreated().WithPayload(&rest_model_zrok.AccessResponse{FrontendToken: feToken})
return share.NewAccessCreated().WithPayload(&rest_model_zrok.AccessResponse{
FrontendToken: feToken,
BackendMode: shr.BackendMode,
})
}

View File

@ -0,0 +1,55 @@
package controller
import (
"github.com/go-openapi/runtime/middleware"
"github.com/openziti/zrok/rest_model_zrok"
"github.com/openziti/zrok/rest_server_zrok/operations/metadata"
"github.com/sirupsen/logrus"
)
type accountDetailHandler struct{}
func newAccountDetailHandler() *accountDetailHandler {
return &accountDetailHandler{}
}
func (h *accountDetailHandler) Handle(params metadata.GetAccountDetailParams, principal *rest_model_zrok.Principal) middleware.Responder {
trx, err := str.Begin()
if err != nil {
logrus.Errorf("error starting transaction for '%v': %v", principal.Email, err)
return metadata.NewGetAccountDetailInternalServerError()
}
defer func() { _ = trx.Rollback() }()
envs, err := str.FindEnvironmentsForAccount(int(principal.ID), trx)
if err != nil {
logrus.Errorf("error retrieving environments for '%v': %v", principal.Email, err)
return metadata.NewGetAccountDetailInternalServerError()
}
sparkRx := make(map[int][]int64)
sparkTx := make(map[int][]int64)
if cfg.Metrics != nil && cfg.Metrics.Influx != nil {
sparkRx, sparkTx, err = sparkDataForEnvironments(envs)
if err != nil {
logrus.Errorf("error querying spark data for environments for '%v': %v", principal.Email, err)
}
} else {
logrus.Debug("skipping spark data for environments; no influx configuration")
}
var payload []*rest_model_zrok.Environment
for _, env := range envs {
var sparkData []*rest_model_zrok.SparkDataSample
for i := 0; i < len(sparkRx[env.Id]) && i < len(sparkTx[env.Id]); i++ {
sparkData = append(sparkData, &rest_model_zrok.SparkDataSample{Rx: float64(sparkRx[env.Id][i]), Tx: float64(sparkTx[env.Id][i])})
}
payload = append(payload, &rest_model_zrok.Environment{
Activity: sparkData,
Address: env.Address,
CreatedAt: env.CreatedAt.UnixMilli(),
Description: env.Description,
Host: env.Host,
UpdatedAt: env.UpdatedAt.UnixMilli(),
ZID: env.ZId,
})
}
return metadata.NewGetAccountDetailOK().WithPayload(payload)
}

View File

@ -12,7 +12,8 @@ import (
"github.com/openziti/edge/rest_model"
rest_model_edge "github.com/openziti/edge/rest_model"
"github.com/openziti/sdk-golang/ziti"
config2 "github.com/openziti/sdk-golang/ziti/config"
ziti_config "github.com/openziti/sdk-golang/ziti/config"
zrok_config "github.com/openziti/zrok/controller/config"
"github.com/openziti/zrok/controller/store"
"github.com/openziti/zrok/controller/zrokEdgeSdk"
"github.com/openziti/zrok/model"
@ -22,7 +23,7 @@ import (
"time"
)
func Bootstrap(skipCtrl, skipFrontend bool, inCfg *Config) error {
func Bootstrap(skipCtrl, skipFrontend bool, inCfg *zrok_config.Config) error {
cfg = inCfg
if v, err := store.Open(cfg.Store); err == nil {
@ -138,7 +139,7 @@ func getIdentityId(identityName string) (string, error) {
if err != nil {
return "", errors.Wrapf(err, "error opening identity '%v' from zrokdir", identityName)
}
zcfg, err := config2.NewFromFile(zif)
zcfg, err := ziti_config.NewFromFile(zif)
if err != nil {
return "", errors.Wrapf(err, "error loading ziti config from file '%v'", zif)
}


@ -1,6 +1,10 @@
package controller
package config
import (
"github.com/openziti/zrok/controller/emailUi"
"github.com/openziti/zrok/controller/env"
"github.com/openziti/zrok/controller/limits"
"github.com/openziti/zrok/controller/metrics"
"github.com/openziti/zrok/controller/zrokEdgeSdk"
"time"
@ -9,20 +13,21 @@ import (
"github.com/pkg/errors"
)
const ConfigVersion = 2
const ConfigVersion = 3
type Config struct {
V int
Admin *AdminConfig
Bridge *metrics.BridgeConfig
Endpoint *EndpointConfig
Email *EmailConfig
Influx *InfluxConfig
Limits *LimitsConfig
Email *emailUi.Config
Limits *limits.Config
Maintenance *MaintenanceConfig
Metrics *metrics.Config
Registration *RegistrationConfig
ResetPassword *ResetPasswordConfig
Store *store.Config
Ziti *zrokEdgeSdk.ZitiConfig
Ziti *zrokEdgeSdk.Config
}
type AdminConfig struct {
@ -35,14 +40,6 @@ type EndpointConfig struct {
Port int
}
type EmailConfig struct {
Host string
Port int
Username string
Password string `cf:"+secret"`
From string
}
type RegistrationConfig struct {
RegistrationUrlTemplate string
TokenStrategy string
@ -52,13 +49,6 @@ type ResetPasswordConfig struct {
ResetUrlTemplate string
}
type InfluxConfig struct {
Url string
Bucket string
Org string
Token string `cf:"+secret"`
}
type MaintenanceConfig struct {
ResetPassword *ResetPasswordMaintenanceConfig
Registration *RegistrationMaintenanceConfig
@ -76,19 +66,9 @@ type ResetPasswordMaintenanceConfig struct {
BatchLimit int
}
const Unlimited = -1
type LimitsConfig struct {
Environments int
Shares int
}
func DefaultConfig() *Config {
return &Config{
Limits: &LimitsConfig{
Environments: Unlimited,
Shares: Unlimited,
},
Limits: limits.DefaultConfig(),
Maintenance: &MaintenanceConfig{
ResetPassword: &ResetPasswordMaintenanceConfig{
ExpirationTimeout: time.Minute * 15,
@ -106,7 +86,7 @@ func DefaultConfig() *Config {
func LoadConfig(path string) (*Config, error) {
cfg := DefaultConfig()
if err := cf.BindYaml(cfg, path, cf.DefaultOptions()); err != nil {
if err := cf.BindYaml(cfg, path, env.GetCfOptions()); err != nil {
return nil, errors.Wrapf(err, "error loading controller config '%v'", path)
}
if cfg.V != ConfigVersion {


@ -3,15 +3,16 @@ package controller
import (
"github.com/go-openapi/runtime/middleware"
"github.com/openziti/zrok/build"
"github.com/openziti/zrok/controller/config"
"github.com/openziti/zrok/rest_model_zrok"
"github.com/openziti/zrok/rest_server_zrok/operations/metadata"
)
type configurationHandler struct {
cfg *Config
cfg *config.Config
}
func newConfigurationHandler(cfg *Config) *configurationHandler {
func newConfigurationHandler(cfg *config.Config) *configurationHandler {
return &configurationHandler{
cfg: cfg,
}


@ -2,6 +2,10 @@ package controller
import (
"context"
"github.com/openziti/zrok/controller/config"
"github.com/openziti/zrok/controller/limits"
"github.com/openziti/zrok/controller/metrics"
"github.com/sirupsen/logrus"
"github.com/go-openapi/loads"
influxdb2 "github.com/influxdata/influxdb-client-go/v2"
@ -13,11 +17,12 @@ import (
"github.com/pkg/errors"
)
var cfg *Config
var cfg *config.Config
var str *store.Store
var idb influxdb2.Client
var limitsAgent *limits.Agent
func Run(inCfg *Config) error {
func Run(inCfg *config.Config) error {
cfg = inCfg
swaggerSpec, err := loads.Embedded(rest_server_zrok.SwaggerJSON, rest_server_zrok.FlatSwaggerJSON)
@ -39,15 +44,22 @@ func Run(inCfg *Config) error {
api.AdminInviteTokenGenerateHandler = newInviteTokenGenerateHandler()
api.AdminListFrontendsHandler = newListFrontendsHandler()
api.AdminUpdateFrontendHandler = newUpdateFrontendHandler()
api.EnvironmentEnableHandler = newEnableHandler(cfg.Limits)
api.EnvironmentEnableHandler = newEnableHandler()
api.EnvironmentDisableHandler = newDisableHandler()
api.MetadataGetAccountDetailHandler = newAccountDetailHandler()
api.MetadataConfigurationHandler = newConfigurationHandler(cfg)
if cfg.Metrics != nil && cfg.Metrics.Influx != nil {
api.MetadataGetAccountMetricsHandler = newGetAccountMetricsHandler(cfg.Metrics.Influx)
api.MetadataGetEnvironmentMetricsHandler = newGetEnvironmentMetricsHandler(cfg.Metrics.Influx)
api.MetadataGetShareMetricsHandler = newGetShareMetricsHandler(cfg.Metrics.Influx)
}
api.MetadataGetEnvironmentDetailHandler = newEnvironmentDetailHandler()
api.MetadataGetFrontendDetailHandler = newGetFrontendDetailHandler()
api.MetadataGetShareDetailHandler = newShareDetailHandler()
api.MetadataOverviewHandler = metadata.OverviewHandlerFunc(overviewHandler)
api.MetadataOverviewHandler = newOverviewHandler()
api.MetadataVersionHandler = metadata.VersionHandlerFunc(versionHandler)
api.ShareAccessHandler = newAccessHandler()
api.ShareShareHandler = newShareHandler(cfg.Limits)
api.ShareShareHandler = newShareHandler()
api.ShareUnaccessHandler = newUnaccessHandler()
api.ShareUnshareHandler = newUnshareHandler()
api.ShareUpdateShareHandler = newUpdateShareHandler()
@ -62,8 +74,31 @@ func Run(inCfg *Config) error {
return errors.Wrap(err, "error opening store")
}
if cfg.Influx != nil {
idb = influxdb2.NewClient(cfg.Influx.Url, cfg.Influx.Token)
if cfg.Metrics != nil && cfg.Metrics.Influx != nil {
idb = influxdb2.NewClient(cfg.Metrics.Influx.Url, cfg.Metrics.Influx.Token)
} else {
logrus.Warn("skipping influx client; no configuration")
}
if cfg.Metrics != nil && cfg.Metrics.Agent != nil && cfg.Metrics.Influx != nil {
ma, err := metrics.NewAgent(cfg.Metrics.Agent, str, cfg.Metrics.Influx)
if err != nil {
return errors.Wrap(err, "error creating metrics agent")
}
if err := ma.Start(); err != nil {
return errors.Wrap(err, "error starting metrics agent")
}
defer func() { ma.Stop() }()
if cfg.Limits != nil && cfg.Limits.Enforcing {
limitsAgent, err = limits.NewAgent(cfg.Limits, cfg.Metrics.Influx, cfg.Ziti, cfg.Email, str)
if err != nil {
return errors.Wrap(err, "error creating limits agent")
}
ma.AddUsageSink(limitsAgent)
limitsAgent.Start()
defer func() { limitsAgent.Stop() }()
}
}
ctx, cancel := context.WithCancel(context.Background())


@ -100,10 +100,10 @@ func (h *disableHandler) removeSharesForEnvironment(envId int, tx *sqlx.Tx, edge
if err := zrokEdgeSdk.DeleteServiceEdgeRouterPolicy(env.ZId, shrToken, edge); err != nil {
logrus.Error(err)
}
if err := zrokEdgeSdk.DeleteServicePolicyDial(env.ZId, shrToken, edge); err != nil {
if err := zrokEdgeSdk.DeleteServicePoliciesDial(env.ZId, shrToken, edge); err != nil {
logrus.Error(err)
}
if err := zrokEdgeSdk.DeleteServicePolicyBind(env.ZId, shrToken, edge); err != nil {
if err := zrokEdgeSdk.DeleteServicePoliciesBind(env.ZId, shrToken, edge); err != nil {
logrus.Error(err)
}
if err := zrokEdgeSdk.DeleteConfig(env.ZId, shrToken, edge); err != nil {
@ -129,7 +129,7 @@ func (h *disableHandler) removeFrontendsForEnvironment(envId int, tx *sqlx.Tx, e
return err
}
for _, fe := range fes {
if err := zrokEdgeSdk.DeleteServicePolicy(env.ZId, fmt.Sprintf("tags.zrokFrontendToken=\"%v\" and type=1", fe.Token), edge); err != nil {
if err := zrokEdgeSdk.DeleteServicePolicies(env.ZId, fmt.Sprintf("tags.zrokFrontendToken=\"%v\" and type=1", fe.Token), edge); err != nil {
logrus.Errorf("error removing frontend access for '%v': %v", fe.Token, err)
}
}


@ -0,0 +1,9 @@
package emailUi
type Config struct {
Host string
Port int
Username string
Password string `cf:"+secret"`
From string
}


@ -2,5 +2,5 @@ package emailUi
import "embed"
//go:embed verify.gohtml verify.gotext resetPassword.gohtml resetPassword.gotext
//go:embed verify.gohtml verify.gotext resetPassword.gohtml resetPassword.gotext limitWarning.gohtml limitWarning.gotext
var FS embed.FS


@ -0,0 +1,156 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
<title>Transfer limit warning!</title>
<meta name="description" content="zrok Transfer Limit Warning">
<meta name="viewport" content="width=device-width">
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=JetBrains+Mono&display=swap" rel="stylesheet">
<style>
body {
margin: 0;
padding: 25px;
font-family: 'JetBrains Mono', 'Courier New', monospace;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
color: #ffffff;
background-color: #3b2693;
}
a:link {
color: #00d7e4;
}
a:visited {
color: #00d7e4;
}
a:hover,
a:active {
color: #ff0100;
}
.claim {
font-size: 2em;
margin: 0.5em 0 1em 0;
}
.container {
width: 62em;
margin: 2em auto;
max-width: 100%;
text-align: center;
}
.btn {
display: inline-block;
margin: .25em;
padding: 10px 16px;
font-size: 1.15em;
line-height: 1.33;
border-radius: 6px;
text-align: center;
white-space: nowrap;
vertical-align: middle;
text-decoration: none;
}
.btn-primary {
color: #ffffff;
background-color: #ff0100;
border-color: #ff0100;
}
a.btn-primary:link,
a.btn-primary:visited {
color: #ffffff;
}
a.btn-primary:hover,
a.btn-primary:active {
background-color: #cf0100;
}
.btn-secondary {
background-color: #b3b3b3;
border-color: #b3b3b3;
color: #252525;
font-weight: 800;
}
a.btn-secondary:link,
a.btn-secondary:visited {
color: #666;
}
a.btn-secondary:hover,
a.btn-secondary:active {
background-color: #ccc;
color: #333;
}
.about {
margin: 1em auto;
}
.about td {
text-align: left;
}
.about td:first-child {
width: 80px;
}
@media screen and (max-width: 600px) {
img {
height: auto !important;
}
}
@media screen and (max-width: 400px) {
body {
font-size: 14px;
}
}
@media screen and (max-width: 320px) {
body {
font-size: 12px;
}
}
</style>
</head>
<body style="font-family: 'JetBrains Mono', 'Courier New', monospace; color: #ffffff; background-color: #3b2693; font-weight: 600;">
<div class="container">
<div class="banner" style="margin: auto;">
<img src="https://zrok.io/wp-content/uploads/2023/03/warning.jpg" alt="warning" width="363" height="500" style="padding-bottom: 10px;"/>
</div>
<div class="cta" style="text-align: center;">
<h3 style="text-align: center;">Your account is reaching a transfer limit, {{ .EmailAddress }}.</h3>
</div>
<div>
{{ .Detail }}
</div>
<table border="0" cellpadding="0" cellspacing="0" align="center" class="about">
<tr>
<td><a href="https://github.com/openziti/zrok" target="_blank">github.com/openziti/zrok</a></td>
</tr>
<tr>
<td>{{ .Version }}</td>
</tr>
</table>
<p style="text-align: center;">Copyright © 2023 <a href="http://www.netfoundry.io" target="_blank" style="color: #00d7e4;">NetFoundry, Inc.</a></p>
</div>
</body>
</html>


@ -0,0 +1,3 @@
Your account is nearing a transfer size limit, {{ .EmailAddress }}!
{{ .Detail }}


@ -0,0 +1,25 @@
package emailUi
import (
"bytes"
"github.com/pkg/errors"
"text/template"
)
type WarningEmail struct {
EmailAddress string
Detail string
Version string
}
func (we WarningEmail) MergeTemplate(filename string) (string, error) {
t, err := template.ParseFS(FS, filename)
if err != nil {
return "", errors.Wrapf(err, "error parsing warning email template '%v'", filename)
}
buf := new(bytes.Buffer)
if err := t.Execute(buf, we); err != nil {
return "", errors.Wrapf(err, "error executing warning email template '%v'", filename)
}
return buf.String(), nil
}
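A minimal usage sketch of the template-merge pattern above. Because the real type reads from the package's embedded FS, this sketch substitutes an inline string template via `template.New(...).Parse`; the template text and values are illustrative assumptions:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// mirrors emailUi.WarningEmail from the diff above
type WarningEmail struct {
	EmailAddress string
	Detail       string
	Version      string
}

// merge parallels MergeTemplate, but parses an inline template string
// instead of a file from the embedded FS.
func (we WarningEmail) merge(tmpl string) (string, error) {
	t, err := template.New("warning").Parse(tmpl)
	if err != nil {
		return "", err
	}
	buf := new(bytes.Buffer)
	if err := t.Execute(buf, we); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	we := WarningEmail{EmailAddress: "user@example.com", Detail: "80% of limit used", Version: "v0.4.0"}
	out, err := we.merge("Your account is nearing a transfer size limit, {{ .EmailAddress }}!\n{{ .Detail }}\n")
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```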


@ -13,24 +13,22 @@ import (
"github.com/sirupsen/logrus"
)
type enableHandler struct {
cfg *LimitsConfig
}
type enableHandler struct{}
func newEnableHandler(cfg *LimitsConfig) *enableHandler {
return &enableHandler{cfg: cfg}
func newEnableHandler() *enableHandler {
return &enableHandler{}
}
func (h *enableHandler) Handle(params environment.EnableParams, principal *rest_model_zrok.Principal) middleware.Responder {
// start transaction early; if it fails, don't bother creating ziti resources
tx, err := str.Begin()
trx, err := str.Begin()
if err != nil {
logrus.Errorf("error starting transaction for user '%v': %v", principal.Email, err)
return environment.NewEnableInternalServerError()
}
defer func() { _ = tx.Rollback() }()
defer func() { _ = trx.Rollback() }()
if err := h.checkLimits(principal, tx); err != nil {
if err := h.checkLimits(principal, trx); err != nil {
logrus.Errorf("limits error for user '%v': %v", principal.Email, err)
return environment.NewEnableUnauthorized()
}
@ -70,14 +68,14 @@ func (h *enableHandler) Handle(params environment.EnableParams, principal *rest_
Host: params.Body.Host,
Address: realRemoteAddress(params.HTTPRequest),
ZId: envZId,
}, tx)
}, trx)
if err != nil {
logrus.Errorf("error storing created identity for user '%v': %v", principal.Email, err)
_ = tx.Rollback()
_ = trx.Rollback()
return environment.NewEnableInternalServerError()
}
if err := tx.Commit(); err != nil {
if err := trx.Commit(); err != nil {
logrus.Errorf("error committing for user '%v': %v", principal.Email, err)
return environment.NewEnableInternalServerError()
}
@ -99,14 +97,16 @@ func (h *enableHandler) Handle(params environment.EnableParams, principal *rest_
return resp
}
func (h *enableHandler) checkLimits(principal *rest_model_zrok.Principal, tx *sqlx.Tx) error {
if !principal.Limitless && h.cfg.Environments > Unlimited {
envs, err := str.FindEnvironmentsForAccount(int(principal.ID), tx)
if err != nil {
return errors.Errorf("unable to find environments for account '%v': %v", principal.Email, err)
}
if len(envs)+1 > h.cfg.Environments {
return errors.Errorf("would exceed environments limit of %d for '%v'", h.cfg.Environments, principal.Email)
func (h *enableHandler) checkLimits(principal *rest_model_zrok.Principal, trx *sqlx.Tx) error {
if !principal.Limitless {
if limitsAgent != nil {
ok, err := limitsAgent.CanCreateEnvironment(int(principal.ID), trx)
if err != nil {
return errors.Wrapf(err, "error checking environment limits for '%v'", principal.Email)
}
if !ok {
return errors.Errorf("environment limit check failed for '%v'", principal.Email)
}
}
}
return nil

controller/env/cf.go vendored Normal file

@ -0,0 +1,14 @@
package env
import (
"github.com/michaelquigley/cf"
)
var cfOpts *cf.Options
func GetCfOptions() *cf.Options {
if cfOpts == nil {
cfOpts = cf.DefaultOptions()
}
return cfOpts
}


@ -25,7 +25,7 @@ func (h *environmentDetailHandler) Handle(params metadata.GetEnvironmentDetailPa
logrus.Errorf("environment '%v' not found for account '%v': %v", params.EnvZID, principal.Email, err)
return metadata.NewGetEnvironmentDetailNotFound()
}
es := &rest_model_zrok.EnvironmentShares{
es := &rest_model_zrok.EnvironmentAndResources{
Environment: &rest_model_zrok.Environment{
Address: senv.Address,
CreatedAt: senv.CreatedAt.UnixMilli(),
@ -40,12 +40,15 @@ func (h *environmentDetailHandler) Handle(params metadata.GetEnvironmentDetailPa
logrus.Errorf("error finding shares for environment '%v' for user '%v': %v", senv.ZId, principal.Email, err)
return metadata.NewGetEnvironmentDetailInternalServerError()
}
var sparkData map[string][]int64
if cfg.Influx != nil {
sparkData, err = sparkDataForShares(shrs)
sparkRx := make(map[string][]int64)
sparkTx := make(map[string][]int64)
if cfg.Metrics != nil && cfg.Metrics.Influx != nil {
sparkRx, sparkTx, err = sparkDataForShares(shrs)
if err != nil {
logrus.Errorf("error querying spark data for shares for user '%v': %v", principal.Email, err)
}
} else {
logrus.Debug("skipping spark data for shares; no influx configuration")
}
for _, shr := range shrs {
feEndpoint := ""
@ -60,6 +63,10 @@ func (h *environmentDetailHandler) Handle(params metadata.GetEnvironmentDetailPa
if shr.BackendProxyEndpoint != nil {
beProxyEndpoint = *shr.BackendProxyEndpoint
}
var sparkData []*rest_model_zrok.SparkDataSample
for i := 0; i < len(sparkRx[shr.Token]) && i < len(sparkTx[shr.Token]); i++ {
sparkData = append(sparkData, &rest_model_zrok.SparkDataSample{Rx: float64(sparkRx[shr.Token][i]), Tx: float64(sparkTx[shr.Token][i])})
}
es.Shares = append(es.Shares, &rest_model_zrok.Share{
Token: shr.Token,
ZID: shr.ZId,
@ -69,7 +76,7 @@ func (h *environmentDetailHandler) Handle(params metadata.GetEnvironmentDetailPa
FrontendEndpoint: feEndpoint,
BackendProxyEndpoint: beProxyEndpoint,
Reserved: shr.Reserved,
Metrics: sparkData[shr.Token],
Activity: sparkData,
CreatedAt: shr.CreatedAt.UnixMilli(),
UpdatedAt: shr.UpdatedAt.UnixMilli(),
})


@ -0,0 +1,60 @@
package controller
import (
"github.com/go-openapi/runtime/middleware"
"github.com/openziti/zrok/rest_model_zrok"
"github.com/openziti/zrok/rest_server_zrok/operations/metadata"
"github.com/sirupsen/logrus"
)
type getFrontendDetailHandler struct{}
func newGetFrontendDetailHandler() *getFrontendDetailHandler {
return &getFrontendDetailHandler{}
}
func (h *getFrontendDetailHandler) Handle(params metadata.GetFrontendDetailParams, principal *rest_model_zrok.Principal) middleware.Responder {
trx, err := str.Begin()
if err != nil {
logrus.Errorf("error starting transaction: %v", err)
return metadata.NewGetFrontendDetailInternalServerError()
}
defer func() { _ = trx.Rollback() }()
fe, err := str.GetFrontend(int(params.FeID), trx)
if err != nil {
logrus.Errorf("error finding frontend '%d': %v", params.FeID, err)
return metadata.NewGetFrontendDetailNotFound()
}
envs, err := str.FindEnvironmentsForAccount(int(principal.ID), trx)
if err != nil {
logrus.Errorf("error finding environments for account '%v': %v", principal.Email, err)
return metadata.NewGetFrontendDetailInternalServerError()
}
found := false
if fe.EnvironmentId == nil {
logrus.Errorf("no owning environment for frontend '%d' for '%v'", fe.Id, principal.Email)
return metadata.NewGetFrontendDetailNotFound()
}
for _, env := range envs {
if *fe.EnvironmentId == env.Id {
found = true
break
}
}
if !found {
logrus.Errorf("environment not matched for frontend '%d' for account '%v'", fe.Id, principal.Email)
return metadata.NewGetFrontendDetailNotFound()
}
shr, err := str.GetShare(fe.Id, trx)
if err != nil {
logrus.Errorf("error getting share for frontend '%d': %v", fe.Id, err)
return metadata.NewGetFrontendDetailInternalServerError()
}
return metadata.NewGetFrontendDetailOK().WithPayload(&rest_model_zrok.Frontend{
ID: int64(fe.Id),
ShrToken: shr.Token,
ZID: fe.ZId,
CreatedAt: fe.CreatedAt.UnixMilli(),
UpdatedAt: fe.UpdatedAt.UnixMilli(),
})
}


@ -8,6 +8,7 @@ import (
"github.com/openziti/edge/rest_management_api_client/service"
"github.com/openziti/edge/rest_management_api_client/service_edge_router_policy"
"github.com/openziti/edge/rest_management_api_client/service_policy"
zrok_config "github.com/openziti/zrok/controller/config"
"github.com/openziti/zrok/controller/store"
"github.com/openziti/zrok/controller/zrokEdgeSdk"
"github.com/pkg/errors"
@ -16,7 +17,7 @@ import (
"time"
)
func GC(inCfg *Config) error {
func GC(inCfg *zrok_config.Config) error {
cfg = inCfg
if v, err := store.Open(cfg.Store); err == nil {
str = v
@ -75,10 +76,10 @@ func gcServices(edge *rest_management_api_client.ZitiEdgeManagement, liveMap map
if err := zrokEdgeSdk.DeleteServiceEdgeRouterPolicy("gc", *svc.Name, edge); err != nil {
logrus.Errorf("error garbage collecting service edge router policy: %v", err)
}
if err := zrokEdgeSdk.DeleteServicePolicyDial("gc", *svc.Name, edge); err != nil {
if err := zrokEdgeSdk.DeleteServicePoliciesDial("gc", *svc.Name, edge); err != nil {
logrus.Errorf("error garbage collecting service dial policy: %v", err)
}
if err := zrokEdgeSdk.DeleteServicePolicyBind("gc", *svc.Name, edge); err != nil {
if err := zrokEdgeSdk.DeleteServicePoliciesBind("gc", *svc.Name, edge); err != nil {
logrus.Errorf("error garbage collecting service bind policy: %v", err)
}
if err := zrokEdgeSdk.DeleteConfig("gc", *svc.Name, edge); err != nil {
@ -136,7 +137,7 @@ func gcServicePolicies(edge *rest_management_api_client.ZitiEdgeManagement, live
if _, found := liveMap[spName]; !found {
logrus.Infof("garbage collecting, svcId='%v'", spName)
deleteFilter := fmt.Sprintf("id=\"%v\"", *sp.ID)
if err := zrokEdgeSdk.DeleteServicePolicy("gc", deleteFilter, edge); err != nil {
if err := zrokEdgeSdk.DeleteServicePolicies("gc", deleteFilter, edge); err != nil {
logrus.Errorf("error garbage collecting service policy: %v", err)
}
} else {


@ -2,6 +2,7 @@ package controller
import (
"github.com/go-openapi/runtime/middleware"
"github.com/openziti/zrok/controller/config"
"github.com/openziti/zrok/controller/store"
"github.com/openziti/zrok/rest_server_zrok/operations/account"
"github.com/openziti/zrok/util"
@ -9,10 +10,10 @@ import (
)
type inviteHandler struct {
cfg *Config
cfg *config.Config
}
func newInviteHandler(cfg *Config) *inviteHandler {
func newInviteHandler(cfg *config.Config) *inviteHandler {
return &inviteHandler{
cfg: cfg,
}


@ -0,0 +1,44 @@
package limits
import (
"github.com/jmoiron/sqlx"
"github.com/openziti/edge/rest_management_api_client"
"github.com/openziti/zrok/controller/store"
"github.com/openziti/zrok/controller/zrokEdgeSdk"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
type accountLimitAction struct {
str *store.Store
edge *rest_management_api_client.ZitiEdgeManagement
}
func newAccountLimitAction(str *store.Store, edge *rest_management_api_client.ZitiEdgeManagement) *accountLimitAction {
return &accountLimitAction{str, edge}
}
func (a *accountLimitAction) HandleAccount(acct *store.Account, rxBytes, txBytes int64, limit *BandwidthPerPeriod, trx *sqlx.Tx) error {
logrus.Infof("limiting '%v'", acct.Email)
envs, err := a.str.FindEnvironmentsForAccount(acct.Id, trx)
if err != nil {
return errors.Wrapf(err, "error finding environments for account '%v'", acct.Email)
}
for _, env := range envs {
shrs, err := a.str.FindSharesForEnvironment(env.Id, trx)
if err != nil {
return errors.Wrapf(err, "error finding shares for environment '%v'", env.ZId)
}
for _, shr := range shrs {
if err := zrokEdgeSdk.DeleteServicePoliciesDial(env.ZId, shr.Token, a.edge); err != nil {
return errors.Wrapf(err, "error deleting dial service policy for '%v'", shr.Token)
}
logrus.Infof("removed dial service policy for share '%v' of environment '%v'", shr.Token, env.ZId)
}
}
return nil
}


@ -0,0 +1,49 @@
package limits
import (
"github.com/jmoiron/sqlx"
"github.com/openziti/edge/rest_management_api_client"
"github.com/openziti/zrok/controller/store"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
type accountRelaxAction struct {
str *store.Store
edge *rest_management_api_client.ZitiEdgeManagement
}
func newAccountRelaxAction(str *store.Store, edge *rest_management_api_client.ZitiEdgeManagement) *accountRelaxAction {
return &accountRelaxAction{str, edge}
}
func (a *accountRelaxAction) HandleAccount(acct *store.Account, _, _ int64, _ *BandwidthPerPeriod, trx *sqlx.Tx) error {
logrus.Infof("relaxing '%v'", acct.Email)
envs, err := a.str.FindEnvironmentsForAccount(acct.Id, trx)
if err != nil {
return errors.Wrapf(err, "error finding environments for account '%v'", acct.Email)
}
for _, env := range envs {
shrs, err := a.str.FindSharesForEnvironment(env.Id, trx)
if err != nil {
return errors.Wrapf(err, "error finding shares for environment '%v'", env.ZId)
}
for _, shr := range shrs {
switch shr.ShareMode {
case "public":
if err := relaxPublicShare(a.str, a.edge, shr, trx); err != nil {
return errors.Wrap(err, "error relaxing public share")
}
case "private":
if err := relaxPrivateShare(a.str, a.edge, shr, trx); err != nil {
return errors.Wrap(err, "error relaxing private share")
}
}
}
}
return nil
}


@ -0,0 +1,53 @@
package limits
import (
"github.com/jmoiron/sqlx"
"github.com/openziti/edge/rest_management_api_client"
"github.com/openziti/zrok/controller/emailUi"
"github.com/openziti/zrok/controller/store"
"github.com/openziti/zrok/util"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
type accountWarningAction struct {
str *store.Store
edge *rest_management_api_client.ZitiEdgeManagement
cfg *emailUi.Config
}
func newAccountWarningAction(cfg *emailUi.Config, str *store.Store, edge *rest_management_api_client.ZitiEdgeManagement) *accountWarningAction {
return &accountWarningAction{str, edge, cfg}
}
func (a *accountWarningAction) HandleAccount(acct *store.Account, rxBytes, txBytes int64, limit *BandwidthPerPeriod, trx *sqlx.Tx) error {
logrus.Infof("warning '%v'", acct.Email)
if a.cfg != nil {
rxLimit := "(unlimited bytes)"
if limit.Limit.Rx != Unlimited {
rxLimit = util.BytesToSize(limit.Limit.Rx)
}
txLimit := "(unlimited bytes)"
if limit.Limit.Tx != Unlimited {
txLimit = util.BytesToSize(limit.Limit.Tx)
}
totalLimit := "(unlimited bytes)"
if limit.Limit.Total != Unlimited {
totalLimit = util.BytesToSize(limit.Limit.Total)
}
detail := newDetailMessage()
detail = detail.append("Your account has received %v and sent %v (for a total of %v), which has triggered a transfer limit warning.", util.BytesToSize(rxBytes), util.BytesToSize(txBytes), util.BytesToSize(rxBytes+txBytes))
detail = detail.append("This zrok instance only allows an account to receive %v, send %v, totalling not more than %v for each %v.", rxLimit, txLimit, totalLimit, limit.Period)
detail = detail.append("If you exceed the transfer limit, access to your shares will be temporarily disabled (until the last %v falls below the transfer limit).", limit.Period)
if err := sendLimitWarningEmail(a.cfg, acct.Email, detail); err != nil {
return errors.Wrapf(err, "error sending limit warning email to '%v'", acct.Email)
}
} else {
logrus.Warnf("skipping warning email for account limit; no email configuration specified")
}
return nil
}

controller/limits/agent.go Normal file

@ -0,0 +1,669 @@
package limits
import (
"fmt"
"github.com/jmoiron/sqlx"
"github.com/openziti/zrok/controller/emailUi"
"github.com/openziti/zrok/controller/metrics"
"github.com/openziti/zrok/controller/store"
"github.com/openziti/zrok/controller/zrokEdgeSdk"
"github.com/openziti/zrok/util"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"reflect"
"time"
)
type Agent struct {
cfg *Config
ifx *influxReader
zCfg *zrokEdgeSdk.Config
str *store.Store
queue chan *metrics.Usage
acctWarningActions []AccountAction
acctLimitActions []AccountAction
acctRelaxActions []AccountAction
envWarningActions []EnvironmentAction
envLimitActions []EnvironmentAction
envRelaxActions []EnvironmentAction
shrWarningActions []ShareAction
shrLimitActions []ShareAction
shrRelaxActions []ShareAction
close chan struct{}
join chan struct{}
}
func NewAgent(cfg *Config, ifxCfg *metrics.InfluxConfig, zCfg *zrokEdgeSdk.Config, emailCfg *emailUi.Config, str *store.Store) (*Agent, error) {
edge, err := zrokEdgeSdk.Client(zCfg)
if err != nil {
return nil, err
}
a := &Agent{
cfg: cfg,
ifx: newInfluxReader(ifxCfg),
zCfg: zCfg,
str: str,
queue: make(chan *metrics.Usage, 1024),
acctWarningActions: []AccountAction{newAccountWarningAction(emailCfg, str, edge)},
acctLimitActions: []AccountAction{newAccountLimitAction(str, edge)},
acctRelaxActions: []AccountAction{newAccountRelaxAction(str, edge)},
envWarningActions: []EnvironmentAction{newEnvironmentWarningAction(emailCfg, str, edge)},
envLimitActions: []EnvironmentAction{newEnvironmentLimitAction(str, edge)},
envRelaxActions: []EnvironmentAction{newEnvironmentRelaxAction(str, edge)},
shrWarningActions: []ShareAction{newShareWarningAction(emailCfg, str, edge)},
shrLimitActions: []ShareAction{newShareLimitAction(str, edge)},
shrRelaxActions: []ShareAction{newShareRelaxAction(str, edge)},
close: make(chan struct{}),
join: make(chan struct{}),
}
return a, nil
}
func (a *Agent) Start() {
go a.run()
}
func (a *Agent) Stop() {
close(a.close)
<-a.join
}
func (a *Agent) CanCreateEnvironment(acctId int, trx *sqlx.Tx) (bool, error) {
if a.cfg.Enforcing {
if empty, err := a.str.IsAccountLimitJournalEmpty(acctId, trx); err == nil && !empty {
alj, err := a.str.FindLatestAccountLimitJournal(acctId, trx)
if err != nil {
return false, err
}
if alj.Action == store.LimitAction {
return false, nil
}
} else if err != nil {
return false, err
}
if a.cfg.Environments > Unlimited {
envs, err := a.str.FindEnvironmentsForAccount(acctId, trx)
if err != nil {
return false, err
}
if len(envs)+1 > a.cfg.Environments {
return false, nil
}
}
}
return true, nil
}
func (a *Agent) CanCreateShare(acctId, envId int, trx *sqlx.Tx) (bool, error) {
if a.cfg.Enforcing {
if empty, err := a.str.IsAccountLimitJournalEmpty(acctId, trx); err == nil && !empty {
alj, err := a.str.FindLatestAccountLimitJournal(acctId, trx)
if err != nil {
return false, err
}
if alj.Action == store.LimitAction {
return false, nil
}
} else if err != nil {
return false, err
}
if empty, err := a.str.IsEnvironmentLimitJournalEmpty(envId, trx); err == nil && !empty {
elj, err := a.str.FindLatestEnvironmentLimitJournal(envId, trx)
if err != nil {
return false, err
}
if elj.Action == store.LimitAction {
return false, nil
}
} else if err != nil {
return false, err
}
if a.cfg.Shares > Unlimited {
envs, err := a.str.FindEnvironmentsForAccount(acctId, trx)
if err != nil {
return false, err
}
total := 0
for i := range envs {
shrs, err := a.str.FindSharesForEnvironment(envs[i].Id, trx)
if err != nil {
return false, errors.Wrapf(err, "unable to find shares for environment '%v'", envs[i].ZId)
}
total += len(shrs)
if total+1 > a.cfg.Shares {
return false, nil
}
logrus.Debugf("total = %d", total)
}
}
}
return true, nil
}
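The share-counting portion of `CanCreateShare` can be read in isolation: sum the shares across all of the account's environments and allow a new one only while the total stays within the configured cap, with `-1` (`Unlimited`) disabling the check entirely. A self-contained sketch of that rule (the `env` type and slice inputs are hypothetical stand-ins for the store lookups):

```go
package main

import "fmt"

// env is a hypothetical in-memory stand-in for the store's environment rows
type env struct{ shares int }

// canCreateShare mirrors the counting logic above: a new share is allowed
// only while the account-wide total stays within the configured cap.
// A cap of -1 (Unlimited in the diff) disables the check.
func canCreateShare(envs []env, limit int) bool {
	if limit <= -1 {
		return true
	}
	total := 0
	for _, e := range envs {
		total += e.shares
	}
	return total+1 <= limit
}

func main() {
	fmt.Println(canCreateShare([]env{{2}, {1}}, 4)) // true: 3 existing + 1 new fits
	fmt.Println(canCreateShare([]env{{2}, {2}}, 4)) // false: a fifth share exceeds the cap
}
```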
func (a *Agent) Handle(u *metrics.Usage) error {
logrus.Debugf("handling: %v", u)
a.queue <- u
return nil
}
func (a *Agent) run() {
logrus.Info("started")
defer logrus.Info("stopped")
lastCycle := time.Now()
mainLoop:
for {
select {
case usage := <-a.queue:
if err := a.enforce(usage); err != nil {
logrus.Errorf("error running enforcement: %v", err)
}
if time.Since(lastCycle) > a.cfg.Cycle {
if err := a.relax(); err != nil {
logrus.Errorf("error running relax cycle: %v", err)
}
lastCycle = time.Now()
}
case <-time.After(a.cfg.Cycle):
if err := a.relax(); err != nil {
logrus.Errorf("error running relax cycle: %v", err)
}
lastCycle = time.Now()
case <-a.close:
close(a.join)
break mainLoop
}
}
}
func (a *Agent) enforce(u *metrics.Usage) error {
trx, err := a.str.Begin()
if err != nil {
return errors.Wrap(err, "error starting transaction")
}
defer func() { _ = trx.Rollback() }()
if enforce, warning, rxBytes, txBytes, err := a.checkAccountLimit(u.AccountId); err == nil {
if enforce {
enforced := false
var enforcedAt time.Time
if empty, err := a.str.IsAccountLimitJournalEmpty(int(u.AccountId), trx); err == nil && !empty {
if latest, err := a.str.FindLatestAccountLimitJournal(int(u.AccountId), trx); err == nil {
enforced = latest.Action == store.LimitAction
enforcedAt = latest.UpdatedAt
}
}
if !enforced {
_, err := a.str.CreateAccountLimitJournal(&store.AccountLimitJournal{
AccountId: int(u.AccountId),
RxBytes: rxBytes,
TxBytes: txBytes,
Action: store.LimitAction,
}, trx)
if err != nil {
return err
}
acct, err := a.str.GetAccount(int(u.AccountId), trx)
if err != nil {
return err
}
// run account limit actions
for _, action := range a.acctLimitActions {
if err := action.HandleAccount(acct, rxBytes, txBytes, a.cfg.Bandwidth.PerAccount, trx); err != nil {
return errors.Wrapf(err, "%v", reflect.TypeOf(action).String())
}
}
if err := trx.Commit(); err != nil {
return err
}
} else {
logrus.Debugf("already enforced limit for account '#%d' at %v", u.AccountId, enforcedAt)
}
} else if warning {
warned := false
var warnedAt time.Time
if empty, err := a.str.IsAccountLimitJournalEmpty(int(u.AccountId), trx); err == nil && !empty {
if latest, err := a.str.FindLatestAccountLimitJournal(int(u.AccountId), trx); err == nil {
warned = latest.Action == store.WarningAction || latest.Action == store.LimitAction
warnedAt = latest.UpdatedAt
}
}
if !warned {
_, err := a.str.CreateAccountLimitJournal(&store.AccountLimitJournal{
AccountId: int(u.AccountId),
RxBytes: rxBytes,
TxBytes: txBytes,
Action: store.WarningAction,
}, trx)
if err != nil {
return err
}
acct, err := a.str.GetAccount(int(u.AccountId), trx)
if err != nil {
return err
}
// run account warning actions
for _, action := range a.acctWarningActions {
if err := action.HandleAccount(acct, rxBytes, txBytes, a.cfg.Bandwidth.PerAccount, trx); err != nil {
return errors.Wrapf(err, "%v", reflect.TypeOf(action).String())
}
}
if err := trx.Commit(); err != nil {
return err
}
} else {
logrus.Debugf("already warned account '#%d' at %v", u.AccountId, warnedAt)
}
} else {
if enforce, warning, rxBytes, txBytes, err := a.checkEnvironmentLimit(u.EnvironmentId); err == nil {
if enforce {
enforced := false
var enforcedAt time.Time
if empty, err := a.str.IsEnvironmentLimitJournalEmpty(int(u.EnvironmentId), trx); err == nil && !empty {
if latest, err := a.str.FindLatestEnvironmentLimitJournal(int(u.EnvironmentId), trx); err == nil {
enforced = latest.Action == store.LimitAction
enforcedAt = latest.UpdatedAt
}
}
if !enforced {
_, err := a.str.CreateEnvironmentLimitJournal(&store.EnvironmentLimitJournal{
EnvironmentId: int(u.EnvironmentId),
RxBytes: rxBytes,
TxBytes: txBytes,
Action: store.LimitAction,
}, trx)
if err != nil {
return err
}
env, err := a.str.GetEnvironment(int(u.EnvironmentId), trx)
if err != nil {
return err
}
// run environment limit actions
for _, action := range a.envLimitActions {
if err := action.HandleEnvironment(env, rxBytes, txBytes, a.cfg.Bandwidth.PerEnvironment, trx); err != nil {
return errors.Wrapf(err, "%v", reflect.TypeOf(action).String())
}
}
if err := trx.Commit(); err != nil {
return err
}
} else {
logrus.Debugf("already enforced limit for environment '#%d' at %v", u.EnvironmentId, enforcedAt)
}
} else if warning {
warned := false
var warnedAt time.Time
if empty, err := a.str.IsEnvironmentLimitJournalEmpty(int(u.EnvironmentId), trx); err == nil && !empty {
if latest, err := a.str.FindLatestEnvironmentLimitJournal(int(u.EnvironmentId), trx); err == nil {
warned = latest.Action == store.WarningAction || latest.Action == store.LimitAction
warnedAt = latest.UpdatedAt
}
}
if !warned {
_, err := a.str.CreateEnvironmentLimitJournal(&store.EnvironmentLimitJournal{
EnvironmentId: int(u.EnvironmentId),
RxBytes: rxBytes,
TxBytes: txBytes,
Action: store.WarningAction,
}, trx)
if err != nil {
return err
}
env, err := a.str.GetEnvironment(int(u.EnvironmentId), trx)
if err != nil {
return err
}
// run environment warning actions
for _, action := range a.envWarningActions {
if err := action.HandleEnvironment(env, rxBytes, txBytes, a.cfg.Bandwidth.PerEnvironment, trx); err != nil {
return errors.Wrapf(err, "%v", reflect.TypeOf(action).String())
}
}
if err := trx.Commit(); err != nil {
return err
}
} else {
logrus.Debugf("already warned environment '#%d' at %v", u.EnvironmentId, warnedAt)
}
} else {
if enforce, warning, rxBytes, txBytes, err := a.checkShareLimit(u.ShareToken); err == nil {
if enforce {
shr, err := a.str.FindShareWithToken(u.ShareToken, trx)
if err != nil {
return err
}
enforced := false
var enforcedAt time.Time
if empty, err := a.str.IsShareLimitJournalEmpty(shr.Id, trx); err == nil && !empty {
if latest, err := a.str.FindLatestShareLimitJournal(shr.Id, trx); err == nil {
enforced = latest.Action == store.LimitAction
enforcedAt = latest.UpdatedAt
}
}
if !enforced {
_, err := a.str.CreateShareLimitJournal(&store.ShareLimitJournal{
ShareId: shr.Id,
RxBytes: rxBytes,
TxBytes: txBytes,
Action: store.LimitAction,
}, trx)
if err != nil {
return err
}
// run share limit actions
for _, action := range a.shrLimitActions {
if err := action.HandleShare(shr, rxBytes, txBytes, a.cfg.Bandwidth.PerShare, trx); err != nil {
return errors.Wrapf(err, "%v", reflect.TypeOf(action).String())
}
}
if err := trx.Commit(); err != nil {
return err
}
} else {
logrus.Debugf("already enforced limit for share '%v' at %v", shr.Token, enforcedAt)
}
} else if warning {
shr, err := a.str.FindShareWithToken(u.ShareToken, trx)
if err != nil {
return err
}
warned := false
var warnedAt time.Time
if empty, err := a.str.IsShareLimitJournalEmpty(shr.Id, trx); err == nil && !empty {
if latest, err := a.str.FindLatestShareLimitJournal(shr.Id, trx); err == nil {
warned = latest.Action == store.WarningAction || latest.Action == store.LimitAction
warnedAt = latest.UpdatedAt
}
}
if !warned {
_, err := a.str.CreateShareLimitJournal(&store.ShareLimitJournal{
ShareId: shr.Id,
RxBytes: rxBytes,
TxBytes: txBytes,
Action: store.WarningAction,
}, trx)
if err != nil {
return err
}
// run share warning actions
for _, action := range a.shrWarningActions {
if err := action.HandleShare(shr, rxBytes, txBytes, a.cfg.Bandwidth.PerShare, trx); err != nil {
return errors.Wrapf(err, "%v", reflect.TypeOf(action).String())
}
}
if err := trx.Commit(); err != nil {
return err
}
} else {
logrus.Debugf("already warned share '%v' at %v", shr.Token, warnedAt)
}
}
} else {
logrus.Error(err)
}
}
} else {
logrus.Error(err)
}
}
} else {
logrus.Error(err)
}
return nil
}
func (a *Agent) relax() error {
logrus.Debug("relaxing")
trx, err := a.str.Begin()
if err != nil {
return errors.Wrap(err, "error starting transaction")
}
defer func() { _ = trx.Rollback() }()
commit := false
if sljs, err := a.str.FindAllLatestShareLimitJournal(trx); err == nil {
for _, slj := range sljs {
if shr, err := a.str.GetShare(slj.ShareId, trx); err == nil {
if slj.Action == store.WarningAction || slj.Action == store.LimitAction {
if enforce, warning, rxBytes, txBytes, err := a.checkShareLimit(shr.Token); err == nil {
if !enforce && !warning {
if slj.Action == store.LimitAction {
// run relax actions for share
for _, action := range a.shrRelaxActions {
if err := action.HandleShare(shr, rxBytes, txBytes, a.cfg.Bandwidth.PerShare, trx); err != nil {
return errors.Wrapf(err, "%v", reflect.TypeOf(action).String())
}
}
} else {
logrus.Infof("relaxing warning for '%v'", shr.Token)
}
if err := a.str.DeleteShareLimitJournalForShare(shr.Id, trx); err == nil {
commit = true
} else {
logrus.Errorf("error deleting share_limit_journal for '%v': %v", shr.Token, err)
}
} else {
logrus.Infof("share '%v' still over limit", shr.Token)
}
} else {
logrus.Errorf("error checking share limit for '%v': %v", shr.Token, err)
}
}
} else {
logrus.Errorf("error getting share for '#%d': %v", slj.ShareId, err)
}
}
} else {
return err
}
if eljs, err := a.str.FindAllLatestEnvironmentLimitJournal(trx); err == nil {
for _, elj := range eljs {
if env, err := a.str.GetEnvironment(elj.EnvironmentId, trx); err == nil {
if elj.Action == store.WarningAction || elj.Action == store.LimitAction {
if enforce, warning, rxBytes, txBytes, err := a.checkEnvironmentLimit(int64(elj.EnvironmentId)); err == nil {
if !enforce && !warning {
if elj.Action == store.LimitAction {
// run relax actions for environment
for _, action := range a.envRelaxActions {
if err := action.HandleEnvironment(env, rxBytes, txBytes, a.cfg.Bandwidth.PerEnvironment, trx); err != nil {
return errors.Wrapf(err, "%v", reflect.TypeOf(action).String())
}
}
} else {
logrus.Infof("relaxing warning for '%v'", env.ZId)
}
if err := a.str.DeleteEnvironmentLimitJournalForEnvironment(env.Id, trx); err == nil {
commit = true
} else {
logrus.Errorf("error deleting environment_limit_journal for '%v': %v", env.ZId, err)
}
} else {
logrus.Infof("environment '%v' still over limit", env.ZId)
}
} else {
logrus.Errorf("error checking environment limit for '%v': %v", env.ZId, err)
}
}
} else {
logrus.Errorf("error getting environment for '#%d': %v", elj.EnvironmentId, err)
}
}
} else {
return err
}
if aljs, err := a.str.FindAllLatestAccountLimitJournal(trx); err == nil {
for _, alj := range aljs {
if acct, err := a.str.GetAccount(alj.AccountId, trx); err == nil {
if alj.Action == store.WarningAction || alj.Action == store.LimitAction {
if enforce, warning, rxBytes, txBytes, err := a.checkAccountLimit(int64(alj.AccountId)); err == nil {
if !enforce && !warning {
if alj.Action == store.LimitAction {
// run relax actions for account
for _, action := range a.acctRelaxActions {
if err := action.HandleAccount(acct, rxBytes, txBytes, a.cfg.Bandwidth.PerAccount, trx); err != nil {
return errors.Wrapf(err, "%v", reflect.TypeOf(action).String())
}
}
} else {
logrus.Infof("relaxing warning for '%v'", acct.Email)
}
if err := a.str.DeleteAccountLimitJournalForAccount(acct.Id, trx); err == nil {
commit = true
} else {
logrus.Errorf("error deleting account_limit_journal for '%v': %v", acct.Email, err)
}
} else {
logrus.Infof("account '%v' still over limit", acct.Email)
}
} else {
logrus.Errorf("error checking account limit for '%v': %v", acct.Email, err)
}
}
} else {
logrus.Errorf("error getting account for '#%d': %v", alj.AccountId, err)
}
}
} else {
return err
}
if commit {
if err := trx.Commit(); err != nil {
return err
}
}
return nil
}
func (a *Agent) checkAccountLimit(acctId int64) (enforce, warning bool, rxBytes, txBytes int64, err error) {
period := 24 * time.Hour
limit := DefaultBandwidthPerPeriod()
if a.cfg.Bandwidth != nil && a.cfg.Bandwidth.PerAccount != nil {
limit = a.cfg.Bandwidth.PerAccount
}
if limit.Period > 0 {
period = limit.Period
}
rx, tx, err := a.ifx.totalRxTxForAccount(acctId, period)
if err != nil {
logrus.Error(err)
}
enforce, warning = a.checkLimit(limit, rx, tx)
return enforce, warning, rx, tx, nil
}
func (a *Agent) checkEnvironmentLimit(envId int64) (enforce, warning bool, rxBytes, txBytes int64, err error) {
period := 24 * time.Hour
limit := DefaultBandwidthPerPeriod()
if a.cfg.Bandwidth != nil && a.cfg.Bandwidth.PerEnvironment != nil {
limit = a.cfg.Bandwidth.PerEnvironment
}
if limit.Period > 0 {
period = limit.Period
}
rx, tx, err := a.ifx.totalRxTxForEnvironment(envId, period)
if err != nil {
logrus.Error(err)
}
enforce, warning = a.checkLimit(limit, rx, tx)
return enforce, warning, rx, tx, nil
}
func (a *Agent) checkShareLimit(shrToken string) (enforce, warning bool, rxBytes, txBytes int64, err error) {
period := 24 * time.Hour
limit := DefaultBandwidthPerPeriod()
if a.cfg.Bandwidth != nil && a.cfg.Bandwidth.PerShare != nil {
limit = a.cfg.Bandwidth.PerShare
}
if limit.Period > 0 {
period = limit.Period
}
rx, tx, err := a.ifx.totalRxTxForShare(shrToken, period)
if err != nil {
logrus.Error(err)
}
enforce, warning = a.checkLimit(limit, rx, tx)
if enforce || warning {
logrus.Debugf("'%v': %v", shrToken, describeLimit(limit, rx, tx))
}
return enforce, warning, rx, tx, nil
}
func (a *Agent) checkLimit(cfg *BandwidthPerPeriod, rx, tx int64) (enforce, warning bool) {
if cfg.Limit.Rx != Unlimited && rx > cfg.Limit.Rx {
return true, false
}
if cfg.Limit.Tx != Unlimited && tx > cfg.Limit.Tx {
return true, false
}
if cfg.Limit.Total != Unlimited && rx+tx > cfg.Limit.Total {
return true, false
}
if cfg.Warning.Rx != Unlimited && rx > cfg.Warning.Rx {
return false, true
}
if cfg.Warning.Tx != Unlimited && tx > cfg.Warning.Tx {
return false, true
}
if cfg.Warning.Total != Unlimited && rx+tx > cfg.Warning.Total {
return false, true
}
return false, false
}
func describeLimit(cfg *BandwidthPerPeriod, rx, tx int64) string {
out := ""
if cfg.Limit.Rx != Unlimited && rx > cfg.Limit.Rx {
out += fmt.Sprintf("['%v' over rx limit '%v']", util.BytesToSize(rx), util.BytesToSize(cfg.Limit.Rx))
}
if cfg.Limit.Tx != Unlimited && tx > cfg.Limit.Tx {
out += fmt.Sprintf("['%v' over tx limit '%v']", util.BytesToSize(tx), util.BytesToSize(cfg.Limit.Tx))
}
if cfg.Limit.Total != Unlimited && rx+tx > cfg.Limit.Total {
out += fmt.Sprintf("['%v' over total limit '%v']", util.BytesToSize(rx+tx), util.BytesToSize(cfg.Limit.Total))
}
if cfg.Warning.Rx != Unlimited && rx > cfg.Warning.Rx {
out += fmt.Sprintf("['%v' over rx warning '%v']", util.BytesToSize(rx), util.BytesToSize(cfg.Warning.Rx))
}
if cfg.Warning.Tx != Unlimited && tx > cfg.Warning.Tx {
out += fmt.Sprintf("['%v' over tx warning '%v']", util.BytesToSize(tx), util.BytesToSize(cfg.Warning.Tx))
}
if cfg.Warning.Total != Unlimited && rx+tx > cfg.Warning.Total {
out += fmt.Sprintf("['%v' over total warning '%v']", util.BytesToSize(rx+tx), util.BytesToSize(cfg.Warning.Total))
}
return out
}


@ -0,0 +1,61 @@
package limits
import "time"
const Unlimited = -1
type Config struct {
Environments int
Shares int
Bandwidth *BandwidthConfig
Cycle time.Duration
Enforcing bool
}
type BandwidthConfig struct {
PerAccount *BandwidthPerPeriod
PerEnvironment *BandwidthPerPeriod
PerShare *BandwidthPerPeriod
}
type BandwidthPerPeriod struct {
Period time.Duration
Warning *Bandwidth
Limit *Bandwidth
}
type Bandwidth struct {
Rx int64
Tx int64
Total int64
}
func DefaultBandwidthPerPeriod() *BandwidthPerPeriod {
return &BandwidthPerPeriod{
Period: 24 * time.Hour,
Warning: &Bandwidth{
Rx: Unlimited,
Tx: Unlimited,
Total: Unlimited,
},
Limit: &Bandwidth{
Rx: Unlimited,
Tx: Unlimited,
Total: Unlimited,
},
}
}
func DefaultConfig() *Config {
return &Config{
Environments: Unlimited,
Shares: Unlimited,
Bandwidth: &BandwidthConfig{
PerAccount: DefaultBandwidthPerPeriod(),
PerEnvironment: DefaultBandwidthPerPeriod(),
PerShare: DefaultBandwidthPerPeriod(),
},
Enforcing: false,
Cycle: 15 * time.Minute,
}
}


@ -0,0 +1,92 @@
package limits
import (
"fmt"
"github.com/openziti/zrok/build"
"github.com/openziti/zrok/controller/emailUi"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/wneessen/go-mail"
)
type detailMessage struct {
lines []string
}
func newDetailMessage() *detailMessage {
return &detailMessage{}
}
func (m *detailMessage) append(msg string, args ...interface{}) *detailMessage {
m.lines = append(m.lines, fmt.Sprintf(msg, args...))
return m
}
func (m *detailMessage) html() string {
out := ""
for i := range m.lines {
out += fmt.Sprintf("<p style=\"text-align: left;\">%s</p>\n", m.lines[i])
}
return out
}
func (m *detailMessage) plain() string {
out := ""
for i := range m.lines {
out += fmt.Sprintf("%s\n\n", m.lines[i])
}
return out
}
func sendLimitWarningEmail(cfg *emailUi.Config, emailTo string, d *detailMessage) error {
emailData := &emailUi.WarningEmail{
EmailAddress: emailTo,
Version: build.String(),
}
emailData.Detail = d.plain()
plainBody, err := emailData.MergeTemplate("limitWarning.gotext")
if err != nil {
return err
}
emailData.Detail = d.html()
htmlBody, err := emailData.MergeTemplate("limitWarning.gohtml")
if err != nil {
return err
}
msg := mail.NewMsg()
if err := msg.From(cfg.From); err != nil {
return errors.Wrap(err, "failed to set from address in limit warning email")
}
if err := msg.To(emailTo); err != nil {
return errors.Wrap(err, "failed to set to address in limit warning email")
}
msg.Subject("zrok Limit Warning Notification")
msg.SetDate()
msg.SetMessageID()
msg.SetBulk()
msg.SetImportance(mail.ImportanceHigh)
msg.SetBodyString(mail.TypeTextPlain, plainBody)
msg.SetBodyString(mail.TypeTextHTML, htmlBody)
client, err := mail.NewClient(cfg.Host,
mail.WithPort(cfg.Port),
mail.WithSMTPAuth(mail.SMTPAuthPlain),
mail.WithUsername(cfg.Username),
mail.WithPassword(cfg.Password),
mail.WithTLSPolicy(mail.TLSMandatory),
)
if err != nil {
return errors.Wrap(err, "error creating limit warning email client")
}
if err := client.DialAndSend(msg); err != nil {
return errors.Wrap(err, "error sending limit warning email")
}
logrus.Infof("limit warning email sent to '%v'", emailTo)
return nil
}


@ -0,0 +1,37 @@
package limits
import (
"github.com/jmoiron/sqlx"
"github.com/openziti/edge/rest_management_api_client"
"github.com/openziti/zrok/controller/store"
"github.com/openziti/zrok/controller/zrokEdgeSdk"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
type environmentLimitAction struct {
str *store.Store
edge *rest_management_api_client.ZitiEdgeManagement
}
func newEnvironmentLimitAction(str *store.Store, edge *rest_management_api_client.ZitiEdgeManagement) *environmentLimitAction {
return &environmentLimitAction{str, edge}
}
func (a *environmentLimitAction) HandleEnvironment(env *store.Environment, _, _ int64, _ *BandwidthPerPeriod, trx *sqlx.Tx) error {
logrus.Infof("limiting '%v'", env.ZId)
shrs, err := a.str.FindSharesForEnvironment(env.Id, trx)
if err != nil {
return errors.Wrapf(err, "error finding shares for environment '%v'", env.ZId)
}
for _, shr := range shrs {
if err := zrokEdgeSdk.DeleteServicePoliciesDial(env.ZId, shr.Token, a.edge); err != nil {
return errors.Wrapf(err, "error deleting dial service policy for '%v'", shr.Token)
}
logrus.Infof("removed dial service policy for share '%v' of environment '%v'", shr.Token, env.ZId)
}
return nil
}


@ -0,0 +1,44 @@
package limits
import (
"github.com/jmoiron/sqlx"
"github.com/openziti/edge/rest_management_api_client"
"github.com/openziti/zrok/controller/store"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
type environmentRelaxAction struct {
str *store.Store
edge *rest_management_api_client.ZitiEdgeManagement
}
func newEnvironmentRelaxAction(str *store.Store, edge *rest_management_api_client.ZitiEdgeManagement) *environmentRelaxAction {
return &environmentRelaxAction{str, edge}
}
func (a *environmentRelaxAction) HandleEnvironment(env *store.Environment, rxBytes, txBytes int64, limit *BandwidthPerPeriod, trx *sqlx.Tx) error {
logrus.Infof("relaxing '%v'", env.ZId)
shrs, err := a.str.FindSharesForEnvironment(env.Id, trx)
if err != nil {
return errors.Wrapf(err, "error finding shares for environment '%v'", env.ZId)
}
for _, shr := range shrs {
if !shr.Deleted {
switch shr.ShareMode {
case "public":
if err := relaxPublicShare(a.str, a.edge, shr, trx); err != nil {
return err
}
case "private":
if err := relaxPrivateShare(a.str, a.edge, shr, trx); err != nil {
return err
}
}
}
}
return nil
}


@ -0,0 +1,60 @@
package limits
import (
"github.com/jmoiron/sqlx"
"github.com/openziti/edge/rest_management_api_client"
"github.com/openziti/zrok/controller/emailUi"
"github.com/openziti/zrok/controller/store"
"github.com/openziti/zrok/util"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
type environmentWarningAction struct {
str *store.Store
edge *rest_management_api_client.ZitiEdgeManagement
cfg *emailUi.Config
}
func newEnvironmentWarningAction(cfg *emailUi.Config, str *store.Store, edge *rest_management_api_client.ZitiEdgeManagement) *environmentWarningAction {
return &environmentWarningAction{str, edge, cfg}
}
func (a *environmentWarningAction) HandleEnvironment(env *store.Environment, rxBytes, txBytes int64, limit *BandwidthPerPeriod, trx *sqlx.Tx) error {
logrus.Infof("warning '%v'", env.ZId)
if a.cfg != nil {
if env.AccountId != nil {
acct, err := a.str.GetAccount(*env.AccountId, trx)
if err != nil {
return err
}
rxLimit := "unlimited bytes"
if limit.Limit.Rx != Unlimited {
rxLimit = util.BytesToSize(limit.Limit.Rx)
}
txLimit := "unlimited bytes"
if limit.Limit.Tx != Unlimited {
txLimit = util.BytesToSize(limit.Limit.Tx)
}
totalLimit := "unlimited bytes"
if limit.Limit.Total != Unlimited {
totalLimit = util.BytesToSize(limit.Limit.Total)
}
detail := newDetailMessage()
detail = detail.append("Your environment '%v' has received %v and sent %v (for a total of %v), which has triggered a transfer limit warning.", env.Description, util.BytesToSize(rxBytes), util.BytesToSize(txBytes), util.BytesToSize(rxBytes+txBytes))
detail = detail.append("This zrok instance only allows an environment to receive %v, send %v, totalling not more than %v for each %v.", rxLimit, txLimit, totalLimit, limit.Period)
detail = detail.append("If you exceed the transfer limit, access to your shares will be temporarily disabled (until the last %v falls below the transfer limit).", limit.Period)
if err := sendLimitWarningEmail(a.cfg, acct.Email, detail); err != nil {
return errors.Wrapf(err, "error sending limit warning email to '%v'", acct.Email)
}
}
} else {
logrus.Warnf("skipping warning email for environment limit; no email configuration specified")
}
return nil
}


@ -0,0 +1,88 @@
package limits
import (
"context"
"fmt"
influxdb2 "github.com/influxdata/influxdb-client-go/v2"
"github.com/influxdata/influxdb-client-go/v2/api"
"github.com/openziti/zrok/controller/metrics"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"strings"
"time"
)
type influxReader struct {
cfg *metrics.InfluxConfig
idb influxdb2.Client
queryApi api.QueryAPI
}
func newInfluxReader(cfg *metrics.InfluxConfig) *influxReader {
idb := influxdb2.NewClient(cfg.Url, cfg.Token)
queryApi := idb.QueryAPI(cfg.Org)
return &influxReader{cfg, idb, queryApi}
}
func (r *influxReader) totalRxTxForAccount(acctId int64, duration time.Duration) (int64, int64, error) {
query := fmt.Sprintf("from(bucket: \"%v\")\n", r.cfg.Bucket) +
fmt.Sprintf("|> range(start: -%v)\n", duration) +
"|> filter(fn: (r) => r[\"_measurement\"] == \"xfer\")\n" +
"|> filter(fn: (r) => r[\"_field\"] == \"rx\" or r[\"_field\"] == \"tx\")\n" +
"|> filter(fn: (r) => r[\"namespace\"] == \"backend\")\n" +
fmt.Sprintf("|> filter(fn: (r) => r[\"acctId\"] == \"%d\")\n", acctId) +
"|> drop(columns: [\"share\", \"envId\"])\n" +
"|> sum()"
return r.runQueryForRxTx(query)
}
func (r *influxReader) totalRxTxForEnvironment(envId int64, duration time.Duration) (int64, int64, error) {
query := fmt.Sprintf("from(bucket: \"%v\")\n", r.cfg.Bucket) +
fmt.Sprintf("|> range(start: -%v)\n", duration) +
"|> filter(fn: (r) => r[\"_measurement\"] == \"xfer\")\n" +
"|> filter(fn: (r) => r[\"_field\"] == \"rx\" or r[\"_field\"] == \"tx\")\n" +
"|> filter(fn: (r) => r[\"namespace\"] == \"backend\")\n" +
fmt.Sprintf("|> filter(fn: (r) => r[\"envId\"] == \"%d\")\n", envId) +
"|> drop(columns: [\"share\", \"acctId\"])\n" +
"|> sum()"
return r.runQueryForRxTx(query)
}
func (r *influxReader) totalRxTxForShare(shrToken string, duration time.Duration) (int64, int64, error) {
query := fmt.Sprintf("from(bucket: \"%v\")\n", r.cfg.Bucket) +
fmt.Sprintf("|> range(start: -%v)\n", duration) +
"|> filter(fn: (r) => r[\"_measurement\"] == \"xfer\")\n" +
"|> filter(fn: (r) => r[\"_field\"] == \"rx\" or r[\"_field\"] == \"tx\")\n" +
"|> filter(fn: (r) => r[\"namespace\"] == \"backend\")\n" +
fmt.Sprintf("|> filter(fn: (r) => r[\"share\"] == \"%v\")\n", shrToken) +
"|> sum()"
return r.runQueryForRxTx(query)
}
func (r *influxReader) runQueryForRxTx(query string) (rx int64, tx int64, err error) {
result, err := r.queryApi.Query(context.Background(), query)
if err != nil {
return -1, -1, err
}
count := 0
for result.Next() {
if v, ok := result.Record().Value().(int64); ok {
switch result.Record().Field() {
case "tx":
tx = v
case "rx":
rx = v
default:
logrus.Warnf("unexpected field '%v'", result.Record().Field())
}
} else {
return -1, -1, errors.New("error asserting value type")
}
count++
}
if count != 0 && count != 2 {
return -1, -1, errors.Errorf("expected 2 results; got '%d' (%v)", count, strings.ReplaceAll(query, "\n", ""))
}
return rx, tx, nil
}


@ -0,0 +1,18 @@
package limits
import (
"github.com/jmoiron/sqlx"
"github.com/openziti/zrok/controller/store"
)
type AccountAction interface {
HandleAccount(a *store.Account, rxBytes, txBytes int64, limit *BandwidthPerPeriod, trx *sqlx.Tx) error
}
type EnvironmentAction interface {
HandleEnvironment(e *store.Environment, rxBytes, txBytes int64, limit *BandwidthPerPeriod, trx *sqlx.Tx) error
}
type ShareAction interface {
HandleShare(s *store.Share, rxBytes, txBytes int64, limit *BandwidthPerPeriod, trx *sqlx.Tx) error
}


@ -0,0 +1,34 @@
package limits
import (
"github.com/jmoiron/sqlx"
"github.com/openziti/edge/rest_management_api_client"
"github.com/openziti/zrok/controller/store"
"github.com/openziti/zrok/controller/zrokEdgeSdk"
"github.com/sirupsen/logrus"
)
type shareLimitAction struct {
str *store.Store
edge *rest_management_api_client.ZitiEdgeManagement
}
func newShareLimitAction(str *store.Store, edge *rest_management_api_client.ZitiEdgeManagement) *shareLimitAction {
return &shareLimitAction{str, edge}
}
func (a *shareLimitAction) HandleShare(shr *store.Share, _, _ int64, _ *BandwidthPerPeriod, trx *sqlx.Tx) error {
logrus.Infof("limiting '%v'", shr.Token)
env, err := a.str.GetEnvironment(shr.EnvironmentId, trx)
if err != nil {
return err
}
if err := zrokEdgeSdk.DeleteServicePoliciesDial(env.ZId, shr.Token, a.edge); err != nil {
return err
}
logrus.Infof("removed dial service policy for '%v'", shr.Token)
return nil
}


@ -0,0 +1,83 @@
package limits
import (
"github.com/jmoiron/sqlx"
"github.com/openziti/edge/rest_management_api_client"
"github.com/openziti/zrok/controller/store"
"github.com/openziti/zrok/controller/zrokEdgeSdk"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
type shareRelaxAction struct {
str *store.Store
edge *rest_management_api_client.ZitiEdgeManagement
}
func newShareRelaxAction(str *store.Store, edge *rest_management_api_client.ZitiEdgeManagement) *shareRelaxAction {
return &shareRelaxAction{str, edge}
}
func (a *shareRelaxAction) HandleShare(shr *store.Share, _, _ int64, _ *BandwidthPerPeriod, trx *sqlx.Tx) error {
logrus.Infof("relaxing '%v'", shr.Token)
if !shr.Deleted {
switch shr.ShareMode {
case "public":
if err := relaxPublicShare(a.str, a.edge, shr, trx); err != nil {
return err
}
case "private":
if err := relaxPrivateShare(a.str, a.edge, shr, trx); err != nil {
return err
}
}
}
return nil
}
func relaxPublicShare(str *store.Store, edge *rest_management_api_client.ZitiEdgeManagement, shr *store.Share, trx *sqlx.Tx) error {
env, err := str.GetEnvironment(shr.EnvironmentId, trx)
if err != nil {
return errors.Wrap(err, "error finding environment")
}
fe, err := str.FindFrontendPubliclyNamed(*shr.FrontendSelection, trx)
if err != nil {
return errors.Wrapf(err, "error finding frontend name '%v' for '%v'", *shr.FrontendSelection, shr.Token)
}
if err := zrokEdgeSdk.CreateServicePolicyDial(env.ZId+"-"+shr.ZId+"-dial", shr.ZId, []string{fe.ZId}, zrokEdgeSdk.ZrokShareTags(shr.Token).SubTags, edge); err != nil {
return errors.Wrapf(err, "error creating dial service policy for '%v'", shr.Token)
}
logrus.Infof("added dial service policy for '%v'", shr.Token)
return nil
}
func relaxPrivateShare(str *store.Store, edge *rest_management_api_client.ZitiEdgeManagement, shr *store.Share, trx *sqlx.Tx) error {
fes, err := str.FindFrontendsForPrivateShare(shr.Id, trx)
if err != nil {
return errors.Wrapf(err, "error finding frontends for share '%v'", shr.Token)
}
for _, fe := range fes {
if fe.EnvironmentId != nil {
env, err := str.GetEnvironment(*fe.EnvironmentId, trx)
if err != nil {
return errors.Wrapf(err, "error getting environment for frontend '%v'", fe.Token)
}
addlTags := map[string]interface{}{
"zrokEnvironmentZId": env.ZId,
"zrokFrontendToken": fe.Token,
"zrokShareToken": shr.Token,
}
if err := zrokEdgeSdk.CreateServicePolicyDial(fe.Token+"-"+env.ZId+"-"+shr.ZId+"-dial", shr.ZId, []string{env.ZId}, addlTags, edge); err != nil {
return errors.Wrapf(err, "unable to create dial policy for frontend '%v'", fe.Token)
}
logrus.Infof("added dial service policy for share '%v' to private frontend '%v'", shr.Token, fe.Token)
}
}
return nil
}


@ -0,0 +1,65 @@
package limits
import (
"github.com/jmoiron/sqlx"
"github.com/openziti/edge/rest_management_api_client"
"github.com/openziti/zrok/controller/emailUi"
"github.com/openziti/zrok/controller/store"
"github.com/openziti/zrok/util"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
type shareWarningAction struct {
str *store.Store
edge *rest_management_api_client.ZitiEdgeManagement
cfg *emailUi.Config
}
func newShareWarningAction(cfg *emailUi.Config, str *store.Store, edge *rest_management_api_client.ZitiEdgeManagement) *shareWarningAction {
return &shareWarningAction{str, edge, cfg}
}
func (a *shareWarningAction) HandleShare(shr *store.Share, rxBytes, txBytes int64, limit *BandwidthPerPeriod, trx *sqlx.Tx) error {
logrus.Infof("warning '%v'", shr.Token)
if a.cfg != nil {
env, err := a.str.GetEnvironment(shr.EnvironmentId, trx)
if err != nil {
return err
}
if env.AccountId != nil {
acct, err := a.str.GetAccount(*env.AccountId, trx)
if err != nil {
return err
}
rxLimit := "unlimited bytes"
if limit.Limit.Rx != Unlimited {
rxLimit = util.BytesToSize(limit.Limit.Rx)
}
txLimit := "unlimited bytes"
if limit.Limit.Tx != Unlimited {
txLimit = util.BytesToSize(limit.Limit.Tx)
}
totalLimit := "unlimited bytes"
if limit.Limit.Total != Unlimited {
totalLimit = util.BytesToSize(limit.Limit.Total)
}
detail := newDetailMessage()
detail = detail.append("Your share '%v' has received %v and sent %v (for a total of %v), which has triggered a transfer limit warning.", shr.Token, util.BytesToSize(rxBytes), util.BytesToSize(txBytes), util.BytesToSize(rxBytes+txBytes))
detail = detail.append("This zrok instance only allows a share to receive %v, send %v, totalling not more than %v for each %v.", rxLimit, txLimit, totalLimit, limit.Period)
detail = detail.append("If you exceed the transfer limit, access to your shares will be temporarily disabled (until the last %v falls below the transfer limit).", limit.Period)
if err := sendLimitWarningEmail(a.cfg, acct.Email, detail); err != nil {
return errors.Wrapf(err, "error sending limit warning email to '%v'", acct.Email)
}
}
} else {
logrus.Warnf("skipping warning email for share limit; no email configuration specified")
}
return nil
}


@ -3,6 +3,7 @@ package controller
import (
"context"
"fmt"
"github.com/openziti/zrok/controller/config"
"strings"
"time"
@ -11,11 +12,11 @@ import (
)
type maintenanceRegistrationAgent struct {
cfg *RegistrationMaintenanceConfig
cfg *config.RegistrationMaintenanceConfig
ctx context.Context
}
func newRegistrationMaintenanceAgent(ctx context.Context, cfg *RegistrationMaintenanceConfig) *maintenanceRegistrationAgent {
func newRegistrationMaintenanceAgent(ctx context.Context, cfg *config.RegistrationMaintenanceConfig) *maintenanceRegistrationAgent {
return &maintenanceRegistrationAgent{
cfg: cfg,
ctx: ctx,
@ -78,11 +79,11 @@ func (ma *maintenanceRegistrationAgent) deleteExpiredAccountRequests() error {
}
type maintenanceResetPasswordAgent struct {
cfg *ResetPasswordMaintenanceConfig
cfg *config.ResetPasswordMaintenanceConfig
ctx context.Context
}
func newMaintenanceResetPasswordAgent(ctx context.Context, cfg *ResetPasswordMaintenanceConfig) *maintenanceResetPasswordAgent {
func newMaintenanceResetPasswordAgent(ctx context.Context, cfg *config.ResetPasswordMaintenanceConfig) *maintenanceResetPasswordAgent {
return &maintenanceResetPasswordAgent{
cfg: cfg,
ctx: ctx,

controller/metrics.go (261 lines, new file)

@ -0,0 +1,261 @@
package controller
import (
"context"
"fmt"
"github.com/go-openapi/runtime/middleware"
influxdb2 "github.com/influxdata/influxdb-client-go/v2"
"github.com/influxdata/influxdb-client-go/v2/api"
"github.com/openziti/zrok/controller/metrics"
"github.com/openziti/zrok/rest_model_zrok"
"github.com/openziti/zrok/rest_server_zrok/operations/metadata"
"github.com/sirupsen/logrus"
"time"
)
type getAccountMetricsHandler struct {
cfg *metrics.InfluxConfig
idb influxdb2.Client
queryApi api.QueryAPI
}
func newGetAccountMetricsHandler(cfg *metrics.InfluxConfig) *getAccountMetricsHandler {
idb := influxdb2.NewClient(cfg.Url, cfg.Token)
queryApi := idb.QueryAPI(cfg.Org)
return &getAccountMetricsHandler{
cfg: cfg,
idb: idb,
queryApi: queryApi,
}
}
func (h *getAccountMetricsHandler) Handle(params metadata.GetAccountMetricsParams, principal *rest_model_zrok.Principal) middleware.Responder {
duration := 30 * 24 * time.Hour
if params.Duration != nil {
v, err := time.ParseDuration(*params.Duration)
if err != nil {
logrus.Errorf("bad duration '%v' for '%v': %v", *params.Duration, principal.Email, err)
return metadata.NewGetAccountMetricsBadRequest()
}
duration = v
}
slice := sliceSize(duration)
query := fmt.Sprintf("from(bucket: \"%v\")\n", h.cfg.Bucket) +
fmt.Sprintf("|> range(start: -%v)\n", duration) +
"|> filter(fn: (r) => r[\"_measurement\"] == \"xfer\")\n" +
"|> filter(fn: (r) => r[\"_field\"] == \"rx\" or r[\"_field\"] == \"tx\")\n" +
"|> filter(fn: (r) => r[\"namespace\"] == \"backend\")\n" +
fmt.Sprintf("|> filter(fn: (r) => r[\"acctId\"] == \"%d\")\n", principal.ID) +
"|> drop(columns: [\"share\", \"envId\"])\n" +
fmt.Sprintf("|> aggregateWindow(every: %v, fn: sum, createEmpty: true)", slice)
rx, tx, timestamps, err := runFluxForRxTxArray(query, h.queryApi)
if err != nil {
logrus.Errorf("error running account metrics query for '%v': %v", principal.Email, err)
return metadata.NewGetAccountMetricsInternalServerError()
}
response := &rest_model_zrok.Metrics{
Scope: "account",
ID: fmt.Sprintf("%d", principal.ID),
Period: duration.Seconds(),
}
for i := 0; i < len(rx) && i < len(tx) && i < len(timestamps); i++ {
response.Samples = append(response.Samples, &rest_model_zrok.MetricsSample{
Rx: rx[i],
Tx: tx[i],
Timestamp: timestamps[i],
})
}
return metadata.NewGetAccountMetricsOK().WithPayload(response)
}
type getEnvironmentMetricsHandler struct {
cfg *metrics.InfluxConfig
idb influxdb2.Client
queryApi api.QueryAPI
}
func newGetEnvironmentMetricsHandler(cfg *metrics.InfluxConfig) *getEnvironmentMetricsHandler {
idb := influxdb2.NewClient(cfg.Url, cfg.Token)
queryApi := idb.QueryAPI(cfg.Org)
return &getEnvironmentMetricsHandler{
cfg: cfg,
idb: idb,
queryApi: queryApi,
}
}
func (h *getEnvironmentMetricsHandler) Handle(params metadata.GetEnvironmentMetricsParams, principal *rest_model_zrok.Principal) middleware.Responder {
trx, err := str.Begin()
if err != nil {
logrus.Errorf("error starting transaction: %v", err)
return metadata.NewGetEnvironmentMetricsInternalServerError()
}
defer func() { _ = trx.Rollback() }()
env, err := str.FindEnvironmentForAccount(params.EnvID, int(principal.ID), trx)
if err != nil {
logrus.Errorf("error finding environment '%s' for '%s': %v", params.EnvID, principal.Email, err)
return metadata.NewGetEnvironmentMetricsUnauthorized()
}
duration := 30 * 24 * time.Hour
if params.Duration != nil {
v, err := time.ParseDuration(*params.Duration)
if err != nil {
logrus.Errorf("bad duration '%v' for '%v': %v", *params.Duration, principal.Email, err)
return metadata.NewGetEnvironmentMetricsBadRequest()
}
duration = v
}
slice := sliceSize(duration)
query := fmt.Sprintf("from(bucket: \"%v\")\n", h.cfg.Bucket) +
fmt.Sprintf("|> range(start: -%v)\n", duration) +
"|> filter(fn: (r) => r[\"_measurement\"] == \"xfer\")\n" +
"|> filter(fn: (r) => r[\"_field\"] == \"rx\" or r[\"_field\"] == \"tx\")\n" +
"|> filter(fn: (r) => r[\"namespace\"] == \"backend\")\n" +
fmt.Sprintf("|> filter(fn: (r) => r[\"envId\"] == \"%d\")\n", int64(env.Id)) +
"|> drop(columns: [\"share\", \"acctId\"])\n" +
fmt.Sprintf("|> aggregateWindow(every: %v, fn: sum, createEmpty: true)", slice)
rx, tx, timestamps, err := runFluxForRxTxArray(query, h.queryApi)
if err != nil {
logrus.Errorf("error running environment metrics query for '%v': %v", principal.Email, err)
return metadata.NewGetEnvironmentMetricsInternalServerError()
}
response := &rest_model_zrok.Metrics{
Scope: "environment",
ID: fmt.Sprintf("%d", env.Id),
Period: duration.Seconds(),
}
for i := 0; i < len(rx) && i < len(tx) && i < len(timestamps); i++ {
response.Samples = append(response.Samples, &rest_model_zrok.MetricsSample{
Rx: rx[i],
Tx: tx[i],
Timestamp: timestamps[i],
})
}
return metadata.NewGetEnvironmentMetricsOK().WithPayload(response)
}
type getShareMetricsHandler struct {
cfg *metrics.InfluxConfig
idb influxdb2.Client
queryApi api.QueryAPI
}
func newGetShareMetricsHandler(cfg *metrics.InfluxConfig) *getShareMetricsHandler {
idb := influxdb2.NewClient(cfg.Url, cfg.Token)
queryApi := idb.QueryAPI(cfg.Org)
return &getShareMetricsHandler{
cfg: cfg,
idb: idb,
queryApi: queryApi,
}
}
func (h *getShareMetricsHandler) Handle(params metadata.GetShareMetricsParams, principal *rest_model_zrok.Principal) middleware.Responder {
trx, err := str.Begin()
if err != nil {
logrus.Errorf("error starting transaction: %v", err)
return metadata.NewGetShareMetricsInternalServerError()
}
defer func() { _ = trx.Rollback() }()
shr, err := str.FindShareWithToken(params.ShrToken, trx)
if err != nil {
logrus.Errorf("error finding share '%v' for '%v': %v", params.ShrToken, principal.Email, err)
return metadata.NewGetShareMetricsUnauthorized()
}
env, err := str.GetEnvironment(shr.EnvironmentId, trx)
if err != nil {
logrus.Errorf("error finding environment '%d' for '%v': %v", shr.EnvironmentId, principal.Email, err)
return metadata.NewGetShareMetricsUnauthorized()
}
if env.AccountId != nil && int64(*env.AccountId) != principal.ID {
logrus.Errorf("user '%v' does not own share '%v'", principal.Email, params.ShrToken)
return metadata.NewGetShareMetricsUnauthorized()
}
duration := 30 * 24 * time.Hour
if params.Duration != nil {
v, err := time.ParseDuration(*params.Duration)
if err != nil {
logrus.Errorf("bad duration '%v' for '%v': %v", *params.Duration, principal.Email, err)
return metadata.NewGetShareMetricsBadRequest()
}
duration = v
}
slice := sliceSize(duration)
query := fmt.Sprintf("from(bucket: \"%v\")\n", h.cfg.Bucket) +
fmt.Sprintf("|> range(start: -%v)\n", duration) +
"|> filter(fn: (r) => r[\"_measurement\"] == \"xfer\")\n" +
"|> filter(fn: (r) => r[\"_field\"] == \"rx\" or r[\"_field\"] == \"tx\")\n" +
"|> filter(fn: (r) => r[\"namespace\"] == \"backend\")\n" +
fmt.Sprintf("|> filter(fn: (r) => r[\"share\"] == \"%v\")\n", shr.Token) +
fmt.Sprintf("|> aggregateWindow(every: %v, fn: sum, createEmpty: true)", slice)
rx, tx, timestamps, err := runFluxForRxTxArray(query, h.queryApi)
if err != nil {
logrus.Errorf("error running share metrics query for '%v': %v", principal.Email, err)
return metadata.NewGetShareMetricsInternalServerError()
}
response := &rest_model_zrok.Metrics{
Scope: "share",
ID: shr.Token,
Period: duration.Seconds(),
}
for i := 0; i < len(rx) && i < len(tx) && i < len(timestamps); i++ {
response.Samples = append(response.Samples, &rest_model_zrok.MetricsSample{
Rx: rx[i],
Tx: tx[i],
Timestamp: timestamps[i],
})
}
return metadata.NewGetShareMetricsOK().WithPayload(response)
}
func runFluxForRxTxArray(query string, queryApi api.QueryAPI) (rx, tx, timestamps []float64, err error) {
result, err := queryApi.Query(context.Background(), query)
if err != nil {
return nil, nil, nil, err
}
for result.Next() {
switch result.Record().Field() {
case "rx":
rxV := int64(0)
if v, ok := result.Record().Value().(int64); ok {
rxV = v
}
rx = append(rx, float64(rxV))
timestamps = append(timestamps, float64(result.Record().Time().UnixMilli()))
case "tx":
txV := int64(0)
if v, ok := result.Record().Value().(int64); ok {
txV = v
}
tx = append(tx, float64(txV))
}
}
return rx, tx, timestamps, nil
}
func sliceSize(duration time.Duration) time.Duration {
switch duration {
case 30 * 24 * time.Hour:
return 24 * time.Hour
case 7 * 24 * time.Hour:
return 4 * time.Hour
case 24 * time.Hour:
return 30 * time.Minute
default:
return duration
}
}


@ -1,58 +1,65 @@
package metrics
import (
"github.com/openziti/zrok/controller/store"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
type MetricsAgent struct {
src Source
cache *shareCache
join chan struct{}
type Agent struct {
events chan ZitiEventMsg
src ZitiEventJsonSource
srcJoin chan struct{}
cache *cache
snks []UsageSink
}
func Run(cfg *Config) (*MetricsAgent, error) {
logrus.Info("starting")
if cfg.Store == nil {
return nil, errors.New("no 'store' configured; exiting")
func NewAgent(cfg *AgentConfig, str *store.Store, ifxCfg *InfluxConfig) (*Agent, error) {
a := &Agent{}
if v, ok := cfg.Source.(ZitiEventJsonSource); ok {
a.src = v
} else {
return nil, errors.New("invalid event json source")
}
cache, err := newShareCache(cfg.Store)
a.cache = newShareCache(str)
a.snks = append(a.snks, newInfluxWriter(ifxCfg))
return a, nil
}
func (a *Agent) AddUsageSink(snk UsageSink) {
a.snks = append(a.snks, snk)
}
func (a *Agent) Start() error {
a.events = make(chan ZitiEventMsg)
srcJoin, err := a.src.Start(a.events)
if err != nil {
return nil, errors.Wrap(err, "error creating share cache")
}
if cfg.Source == nil {
return nil, errors.New("no 'source' configured; exiting")
}
src, ok := cfg.Source.(Source)
if !ok {
return nil, errors.New("invalid 'source'; exiting")
}
if cfg.Influx == nil {
return nil, errors.New("no 'influx' configured; exiting")
}
idb := openInfluxDb(cfg.Influx)
events := make(chan map[string]interface{})
join, err := src.Start(events)
if err != nil {
return nil, errors.Wrap(err, "error starting source")
return err
}
a.srcJoin = srcJoin
go func() {
logrus.Info("started")
defer logrus.Info("stopped")
for {
select {
case event := <-events:
usage := Ingest(event)
if shrToken, err := cache.getToken(usage.ZitiServiceId); err == nil {
usage.ShareToken = shrToken
if err := idb.Write(usage); err != nil {
case event := <-a.events:
if usage, err := Ingest(event.Data()); err == nil {
if err := a.cache.addZrokDetail(usage); err != nil {
logrus.Error(err)
}
shouldAck := true
for _, snk := range a.snks {
if err := snk.Handle(usage); err != nil {
logrus.Error(err)
shouldAck = false
}
}
if shouldAck {
event.Ack()
}
} else {
logrus.Error(err)
}
@ -60,14 +67,10 @@ func Run(cfg *Config) (*MetricsAgent, error) {
}
}()
return &MetricsAgent{src: src, join: join}, nil
return nil
}
func (ma *MetricsAgent) Stop() {
logrus.Info("stopping")
ma.src.Stop()
}
func (ma *MetricsAgent) Join() {
<-ma.join
func (a *Agent) Stop() {
a.src.Stop()
close(a.events)
}


@ -0,0 +1,66 @@
package metrics
import (
"context"
"github.com/michaelquigley/cf"
"github.com/openziti/zrok/controller/env"
"github.com/pkg/errors"
amqp "github.com/rabbitmq/amqp091-go"
"github.com/sirupsen/logrus"
"time"
)
func init() {
env.GetCfOptions().AddFlexibleSetter("amqpSink", loadAmqpSinkConfig)
}
type AmqpSinkConfig struct {
Url string `cf:"+secret"`
QueueName string
}
func loadAmqpSinkConfig(v interface{}, _ *cf.Options) (interface{}, error) {
if submap, ok := v.(map[string]interface{}); ok {
cfg := &AmqpSinkConfig{}
if err := cf.Bind(cfg, submap, cf.DefaultOptions()); err != nil {
return nil, err
}
return newAmqpSink(cfg)
}
return nil, errors.New("invalid config structure for 'amqpSink'")
}
type amqpSink struct {
conn *amqp.Connection
ch *amqp.Channel
queue amqp.Queue
}
func newAmqpSink(cfg *AmqpSinkConfig) (*amqpSink, error) {
conn, err := amqp.Dial(cfg.Url)
if err != nil {
return nil, errors.Wrap(err, "error dialing amqp broker")
}
ch, err := conn.Channel()
if err != nil {
return nil, errors.Wrap(err, "error getting amqp channel")
}
queue, err := ch.QueueDeclare(cfg.QueueName, true, false, false, false, nil)
if err != nil {
return nil, errors.Wrap(err, "error declaring queue")
}
return &amqpSink{conn, ch, queue}, nil
}
func (s *amqpSink) Handle(event ZitiEventJson) error {
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
logrus.Infof("pushing '%v'", event)
return s.ch.PublishWithContext(ctx, "", s.queue.Name, false, false, amqp.Publishing{
ContentType: "application/json",
Body: []byte(event),
})
}


@ -0,0 +1,89 @@
package metrics
import (
"github.com/michaelquigley/cf"
"github.com/openziti/zrok/controller/env"
"github.com/pkg/errors"
amqp "github.com/rabbitmq/amqp091-go"
"github.com/sirupsen/logrus"
)
func init() {
env.GetCfOptions().AddFlexibleSetter("amqpSource", loadAmqpSourceConfig)
}
type AmqpSourceConfig struct {
Url string `cf:"+secret"`
QueueName string
}
func loadAmqpSourceConfig(v interface{}, _ *cf.Options) (interface{}, error) {
if submap, ok := v.(map[string]interface{}); ok {
cfg := &AmqpSourceConfig{}
if err := cf.Bind(cfg, submap, cf.DefaultOptions()); err != nil {
return nil, err
}
return newAmqpSource(cfg)
}
return nil, errors.New("invalid config structure for 'amqpSource'")
}
type amqpSource struct {
conn *amqp.Connection
ch *amqp.Channel
queue amqp.Queue
msgs <-chan amqp.Delivery
join chan struct{}
}
func newAmqpSource(cfg *AmqpSourceConfig) (*amqpSource, error) {
conn, err := amqp.Dial(cfg.Url)
if err != nil {
return nil, errors.Wrap(err, "error dialing amqp broker")
}
ch, err := conn.Channel()
if err != nil {
return nil, errors.Wrap(err, "error getting amqp channel")
}
queue, err := ch.QueueDeclare(cfg.QueueName, true, false, false, false, nil)
if err != nil {
return nil, errors.Wrap(err, "error declaring queue")
}
msgs, err := ch.Consume(cfg.QueueName, "zrok", false, false, false, false, nil)
if err != nil {
return nil, errors.Wrap(err, "error consuming")
}
return &amqpSource{
conn,
ch,
queue,
msgs,
make(chan struct{}),
}, nil
}
func (s *amqpSource) Start(events chan ZitiEventMsg) (join chan struct{}, err error) {
go func() {
logrus.Info("started")
defer logrus.Info("stopped")
for event := range s.msgs {
// copy the loop variable before taking its address; otherwise every
// ZitiEventAMQP would alias the same Delivery and Ack the wrong message
msg := event
events <- &ZitiEventAMQP{
data: ZitiEventJson(msg.Body),
msg:  &msg,
}
}
close(s.join)
}()
return s.join, nil
}
func (s *amqpSource) Stop() {
if err := s.ch.Close(); err != nil {
logrus.Error(err)
}
<-s.join
}


@ -0,0 +1,78 @@
package metrics
import (
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
type BridgeConfig struct {
Source interface{}
Sink interface{}
}
type Bridge struct {
src ZitiEventJsonSource
srcJoin chan struct{}
snk ZitiEventJsonSink
events chan ZitiEventMsg
close chan struct{}
join chan struct{}
}
func NewBridge(cfg *BridgeConfig) (*Bridge, error) {
b := &Bridge{
events: make(chan ZitiEventMsg),
join: make(chan struct{}),
close: make(chan struct{}),
}
if v, ok := cfg.Source.(ZitiEventJsonSource); ok {
b.src = v
} else {
return nil, errors.New("invalid source type")
}
if v, ok := cfg.Sink.(ZitiEventJsonSink); ok {
b.snk = v
} else {
return nil, errors.New("invalid sink type")
}
return b, nil
}
func (b *Bridge) Start() (join chan struct{}, err error) {
if b.srcJoin, err = b.src.Start(b.events); err != nil {
return nil, err
}
go func() {
logrus.Info("started")
defer logrus.Info("stopped")
defer close(b.join)
eventLoop:
for {
select {
case eventJson := <-b.events:
logrus.Info(eventJson)
if err := b.snk.Handle(eventJson.Data()); err == nil {
logrus.Infof("-> %v", eventJson.Data())
} else {
logrus.Error(err)
}
eventJson.Ack()
case <-b.close:
logrus.Info("received close signal")
break eventLoop
}
}
}()
return b.join, nil
}
func (b *Bridge) Stop() {
b.src.Stop()
close(b.close)
<-b.srcJoin
<-b.join
}


@ -0,0 +1,35 @@
package metrics
import (
"github.com/openziti/zrok/controller/store"
)
type cache struct {
str *store.Store
}
func newShareCache(str *store.Store) *cache {
return &cache{str}
}
func (c *cache) addZrokDetail(u *Usage) error {
tx, err := c.str.Begin()
if err != nil {
return err
}
defer func() { _ = tx.Rollback() }()
shr, err := c.str.FindShareWithZIdAndDeleted(u.ZitiServiceId, tx)
if err != nil {
return err
}
u.ShareToken = shr.Token
env, err := c.str.GetEnvironment(shr.EnvironmentId, tx)
if err != nil {
return err
}
u.EnvironmentId = int64(env.Id)
u.AccountId = int64(*env.AccountId)
return nil
}


@ -1,10 +0,0 @@
package metrics
import "github.com/michaelquigley/cf"
func GetCfOptions() *cf.Options {
opts := cf.DefaultOptions()
opts.AddFlexibleSetter("file", loadFileSourceConfig)
opts.AddFlexibleSetter("websocket", loadWebsocketSourceConfig)
return opts
}


@ -1,15 +1,12 @@
package metrics
import (
"github.com/michaelquigley/cf"
"github.com/openziti/zrok/controller/store"
"github.com/pkg/errors"
)
type Config struct {
Source interface{}
Influx *InfluxConfig
Store *store.Config
Agent *AgentConfig
}
type AgentConfig struct {
Source interface{}
}
type InfluxConfig struct {
@ -18,11 +15,3 @@ type InfluxConfig struct {
Org string
Token string `cf:"+secret"`
}
func LoadConfig(path string) (*Config, error) {
cfg := &Config{}
if err := cf.BindYaml(cfg, path, GetCfOptions()); err != nil {
return nil, errors.Wrapf(err, "error loading config from '%v'", path)
}
return cfg, nil
}


@ -2,17 +2,22 @@ package metrics
import (
"encoding/binary"
"encoding/json"
"os"
"github.com/michaelquigley/cf"
"github.com/nxadm/tail"
"github.com/openziti/zrok/controller/env"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"os"
)
func init() {
env.GetCfOptions().AddFlexibleSetter("fileSource", loadFileSourceConfig)
}
type FileSourceConfig struct {
Path string
IndexPath string
Path string
PointerPath string
}
func loadFileSourceConfig(v interface{}, _ *cf.Options) (interface{}, error) {
@ -23,36 +28,36 @@ func loadFileSourceConfig(v interface{}, _ *cf.Options) (interface{}, error) {
}
return &fileSource{cfg: cfg}, nil
}
return nil, errors.New("invalid config structure for 'file' source")
return nil, errors.New("invalid config structure for 'fileSource'")
}
type fileSource struct {
cfg *FileSourceConfig
t *tail.Tail
cfg *FileSourceConfig
ptrF *os.File
t *tail.Tail
}
func (s *fileSource) Start(events chan map[string]interface{}) (join chan struct{}, err error) {
func (s *fileSource) Start(events chan ZitiEventMsg) (join chan struct{}, err error) {
f, err := os.Open(s.cfg.Path)
if err != nil {
return nil, errors.Wrapf(err, "error opening '%v'", s.cfg.Path)
}
_ = f.Close()
idxF, err := os.OpenFile(s.indexPath(), os.O_CREATE|os.O_RDWR, os.ModePerm)
s.ptrF, err = os.OpenFile(s.pointerPath(), os.O_CREATE|os.O_RDWR, os.ModePerm)
if err != nil {
return nil, errors.Wrapf(err, "error opening '%v'", s.indexPath())
return nil, errors.Wrapf(err, "error opening pointer '%v'", s.pointerPath())
}
pos := int64(0)
posBuf := make([]byte, 8)
if n, err := idxF.Read(posBuf); err == nil && n == 8 {
pos = int64(binary.LittleEndian.Uint64(posBuf))
logrus.Infof("recovered stored position: %d", pos)
ptr, err := s.readPtr()
if err != nil {
logrus.Errorf("error reading pointer: %v", err)
}
logrus.Infof("retrieved stored position pointer at '%d'", ptr)
join = make(chan struct{})
go func() {
s.tail(pos, events, idxF)
s.tail(ptr, events)
close(join)
}()
@ -65,43 +70,64 @@ func (s *fileSource) Stop() {
}
}
func (s *fileSource) tail(pos int64, events chan map[string]interface{}, idxF *os.File) {
logrus.Infof("started")
defer logrus.Infof("stopped")
posBuf := make([]byte, 8)
func (s *fileSource) tail(ptr int64, events chan ZitiEventMsg) {
logrus.Info("started")
defer logrus.Info("stopped")
var err error
s.t, err = tail.TailFile(s.cfg.Path, tail.Config{
ReOpen: true,
Follow: true,
Location: &tail.SeekInfo{Offset: pos},
Location: &tail.SeekInfo{Offset: ptr},
})
if err != nil {
logrus.Error(err)
logrus.Errorf("error starting tail: %v", err)
return
}
for line := range s.t.Lines {
event := make(map[string]interface{})
if err := json.Unmarshal([]byte(line.Text), &event); err == nil {
binary.LittleEndian.PutUint64(posBuf, uint64(line.SeekInfo.Offset))
if n, err := idxF.Seek(0, 0); err == nil && n == 0 {
if n, err := idxF.Write(posBuf); err != nil || n != 8 {
logrus.Errorf("error writing index (%d): %v", n, err)
}
}
events <- event
} else {
logrus.Errorf("error parsing line #%d: %v", line.Num, err)
for event := range s.t.Lines {
events <- &ZitiEventJsonMsg{
data: ZitiEventJson(event.Text),
}
if err := s.writePtr(event.SeekInfo.Offset); err != nil {
logrus.Error(err)
}
}
}
func (s *fileSource) indexPath() string {
if s.cfg.IndexPath == "" {
return s.cfg.Path + ".idx"
func (s *fileSource) pointerPath() string {
if s.cfg.PointerPath == "" {
return s.cfg.Path + ".ptr"
} else {
return s.cfg.IndexPath
return s.cfg.PointerPath
}
}
func (s *fileSource) readPtr() (int64, error) {
buf := make([]byte, 8)
if n, err := s.ptrF.Seek(0, 0); err != nil || n != 0 {
return 0, errors.Errorf("error seeking pointer (offset %d): %v", n, err)
}
if n, err := s.ptrF.Read(buf); err != nil || n != 8 {
return 0, errors.Errorf("error reading pointer (%d bytes): %v", n, err)
}
return int64(binary.LittleEndian.Uint64(buf)), nil
}
func (s *fileSource) writePtr(ptr int64) error {
buf := make([]byte, 8)
binary.LittleEndian.PutUint64(buf, uint64(ptr))
if n, err := s.ptrF.Seek(0, 0); err != nil || n != 0 {
return errors.Errorf("error seeking pointer (offset %d): %v", n, err)
}
if n, err := s.ptrF.Write(buf); err != nil || n != 8 {
return errors.Errorf("error writing pointer (%d bytes): %v", n, err)
}
return nil
}


@ -10,42 +10,52 @@ import (
"github.com/sirupsen/logrus"
)
type influxDb struct {
type influxWriter struct {
idb influxdb2.Client
writeApi api.WriteAPIBlocking
}
func openInfluxDb(cfg *InfluxConfig) *influxDb {
func newInfluxWriter(cfg *InfluxConfig) *influxWriter {
idb := influxdb2.NewClient(cfg.Url, cfg.Token)
wapi := idb.WriteAPIBlocking(cfg.Org, cfg.Bucket)
return &influxDb{idb, wapi}
writeApi := idb.WriteAPIBlocking(cfg.Org, cfg.Bucket)
return &influxWriter{idb, writeApi}
}
func (i *influxDb) Write(u *Usage) error {
func (w *influxWriter) Handle(u *Usage) error {
out := fmt.Sprintf("share: %v, circuit: %v", u.ShareToken, u.ZitiCircuitId)
envId := fmt.Sprintf("%d", u.EnvironmentId)
acctId := fmt.Sprintf("%d", u.AccountId)
var pts []*write.Point
circuitPt := influxdb2.NewPoint("circuits",
map[string]string{"share": u.ShareToken, "envId": envId, "acctId": acctId},
map[string]interface{}{"circuit": u.ZitiCircuitId},
u.IntervalStart)
pts = append(pts, circuitPt)
if u.BackendTx > 0 || u.BackendRx > 0 {
pt := influxdb2.NewPoint("xfer",
map[string]string{"namespace": "backend", "share": u.ShareToken},
map[string]interface{}{"bytesRead": u.BackendRx, "bytesWritten": u.BackendTx},
map[string]string{"namespace": "backend", "share": u.ShareToken, "envId": envId, "acctId": acctId},
map[string]interface{}{"rx": u.BackendRx, "tx": u.BackendTx},
u.IntervalStart)
pts = append(pts, pt)
out += fmt.Sprintf(" backend {rx: %v, tx: %v}", util.BytesToSize(u.BackendRx), util.BytesToSize(u.BackendTx))
}
if u.FrontendTx > 0 || u.FrontendRx > 0 {
pt := influxdb2.NewPoint("xfer",
map[string]string{"namespace": "frontend", "share": u.ShareToken},
map[string]interface{}{"bytesRead": u.FrontendRx, "bytesWritten": u.FrontendTx},
map[string]string{"namespace": "frontend", "share": u.ShareToken, "envId": envId, "acctId": acctId},
map[string]interface{}{"rx": u.FrontendRx, "tx": u.FrontendTx},
u.IntervalStart)
pts = append(pts, pt)
out += fmt.Sprintf(" frontend {rx: %v, tx: %v}", util.BytesToSize(u.FrontendRx), util.BytesToSize(u.FrontendTx))
}
if len(pts) > 0 {
if err := i.writeApi.WritePoint(context.Background(), pts...); err == nil {
logrus.Info(out)
} else {
return err
}
if err := w.writeApi.WritePoint(context.Background(), pts...); err == nil {
logrus.Info(out)
} else {
return err
}
return nil
}


@ -2,8 +2,11 @@ package metrics
import (
"fmt"
"github.com/openziti/zrok/util"
"time"
"github.com/openziti/zrok/util"
"github.com/pkg/errors"
amqp "github.com/rabbitmq/amqp091-go"
)
type Usage struct {
@ -12,6 +15,8 @@ type Usage struct {
ZitiServiceId string
ZitiCircuitId string
ShareToken string
EnvironmentId int64
AccountId int64
FrontendTx int64
FrontendRx int64
BackendTx int64
@ -25,17 +30,58 @@ func (u Usage) String() string {
out += ", " + fmt.Sprintf("service '%v'", u.ZitiServiceId)
out += ", " + fmt.Sprintf("circuit '%v'", u.ZitiCircuitId)
out += ", " + fmt.Sprintf("share '%v'", u.ShareToken)
out += ", " + fmt.Sprintf("environment '%d'", u.EnvironmentId)
out += ", " + fmt.Sprintf("account '%d'", u.AccountId)
out += ", " + fmt.Sprintf("fe {rx %v, tx %v}", util.BytesToSize(u.FrontendRx), util.BytesToSize(u.FrontendTx))
out += ", " + fmt.Sprintf("be {rx %v, tx %v}", util.BytesToSize(u.BackendRx), util.BytesToSize(u.BackendTx))
out += "}"
return out
}
type Source interface {
Start(chan map[string]interface{}) (chan struct{}, error)
type UsageSink interface {
Handle(u *Usage) error
}
type ZitiEventJson string
type ZitiEventJsonMsg struct {
data ZitiEventJson
}
func (e *ZitiEventJsonMsg) Data() ZitiEventJson {
return e.data
}
func (e *ZitiEventJsonMsg) Ack() error {
return nil
}
type ZitiEventAMQP struct {
data ZitiEventJson
msg *amqp.Delivery
}
func (e *ZitiEventAMQP) Data() ZitiEventJson {
return e.data
}
func (e *ZitiEventAMQP) Ack() error {
if e.msg == nil {
return errors.New("nil delivery message")
}
return e.msg.Ack(false)
}
type ZitiEventMsg interface {
Data() ZitiEventJson
Ack() error
}
type ZitiEventJsonSource interface {
Start(chan ZitiEventMsg) (join chan struct{}, err error)
Stop()
}
type Ingester interface {
Ingest(msg map[string]interface{}) error
type ZitiEventJsonSink interface {
Handle(event ZitiEventJson) error
}


@ -1,31 +0,0 @@
package metrics
import (
"github.com/openziti/zrok/controller/store"
"github.com/pkg/errors"
)
type shareCache struct {
str *store.Store
}
func newShareCache(cfg *store.Config) (*shareCache, error) {
str, err := store.Open(cfg)
if err != nil {
return nil, errors.Wrap(err, "error opening store")
}
return &shareCache{str}, nil
}
func (sc *shareCache) getToken(svcZId string) (string, error) {
tx, err := sc.str.Begin()
if err != nil {
return "", err
}
defer func() { _ = tx.Rollback() }()
shr, err := sc.str.FindShareWithZIdAndDeleted(svcZId, tx)
if err != nil {
return "", err
}
return shr.Token, nil
}


@ -0,0 +1,94 @@
package metrics
import (
"encoding/json"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"reflect"
"time"
)
func Ingest(event ZitiEventJson) (*Usage, error) {
eventMap := make(map[string]interface{})
if err := json.Unmarshal([]byte(event), &eventMap); err == nil {
u := &Usage{ProcessedStamp: time.Now()}
if ns, found := eventMap["namespace"]; found && ns == "fabric.usage" {
if v, found := eventMap["interval_start_utc"]; found {
if vFloat64, ok := v.(float64); ok {
u.IntervalStart = time.Unix(int64(vFloat64), 0)
} else {
logrus.Error("unable to assert 'interval_start_utc'")
}
} else {
logrus.Error("missing 'interval_start_utc'")
}
if v, found := eventMap["tags"]; found {
if tags, ok := v.(map[string]interface{}); ok {
if v, found := tags["serviceId"]; found {
if vStr, ok := v.(string); ok {
u.ZitiServiceId = vStr
} else {
logrus.Error("unable to assert 'tags/serviceId'")
}
} else {
logrus.Error("missing 'tags/serviceId'")
}
} else {
logrus.Error("unable to assert 'tags'")
}
} else {
logrus.Error("missing 'tags'")
}
if v, found := eventMap["usage"]; found {
if usage, ok := v.(map[string]interface{}); ok {
if v, found := usage["ingress.tx"]; found {
if vFloat64, ok := v.(float64); ok {
u.FrontendTx = int64(vFloat64)
} else {
logrus.Error("unable to assert 'usage/ingress.tx'")
}
}
if v, found := usage["ingress.rx"]; found {
if vFloat64, ok := v.(float64); ok {
u.FrontendRx = int64(vFloat64)
} else {
logrus.Error("unable to assert 'usage/ingress.rx'")
}
}
if v, found := usage["egress.tx"]; found {
if vFloat64, ok := v.(float64); ok {
u.BackendTx = int64(vFloat64)
} else {
logrus.Error("unable to assert 'usage/egress.tx'")
}
}
if v, found := usage["egress.rx"]; found {
if vFloat64, ok := v.(float64); ok {
u.BackendRx = int64(vFloat64)
} else {
logrus.Error("unable to assert 'usage/egress.rx'")
}
}
} else {
logrus.Errorf("unable to assert 'usage' (%v) %v", reflect.TypeOf(v), event)
}
} else {
logrus.Warn("missing 'usage'")
}
if v, found := eventMap["circuit_id"]; found {
if vStr, ok := v.(string); ok {
u.ZitiCircuitId = vStr
} else {
logrus.Error("unable to assert 'circuit_id'")
}
} else {
logrus.Warn("missing 'circuit_id'")
}
} else {
logrus.Error("not 'fabric.usage'")
}
return u, nil
} else {
return nil, errors.Wrap(err, "error unmarshaling")
}
}


@ -1,95 +0,0 @@
package metrics
import (
"github.com/sirupsen/logrus"
"reflect"
"time"
)
func Ingest(event map[string]interface{}) *Usage {
u := &Usage{ProcessedStamp: time.Now()}
if ns, found := event["namespace"]; found && ns == "fabric.usage" {
if v, found := event["interval_start_utc"]; found {
if vFloat64, ok := v.(float64); ok {
u.IntervalStart = time.Unix(int64(vFloat64), 0)
} else {
logrus.Error("unable to assert 'interval_start_utc'")
}
} else {
logrus.Error("missing 'interval_start_utc'")
}
if v, found := event["tags"]; found {
if tags, ok := v.(map[string]interface{}); ok {
if v, found := tags["serviceId"]; found {
if vStr, ok := v.(string); ok {
u.ZitiServiceId = vStr
} else {
logrus.Error("unable to assert 'tags/serviceId'")
}
} else {
logrus.Error("missing 'tags/serviceId'")
}
} else {
logrus.Errorf("unable to assert 'tags'")
}
} else {
logrus.Errorf("missing 'tags'")
}
if v, found := event["usage"]; found {
if usage, ok := v.(map[string]interface{}); ok {
if v, found := usage["ingress.tx"]; found {
if vFloat64, ok := v.(float64); ok {
u.FrontendTx = int64(vFloat64)
} else {
logrus.Error("unable to assert 'usage/ingress.tx'")
}
} else {
logrus.Warn("missing 'usage/ingress.tx'")
}
if v, found := usage["ingress.rx"]; found {
if vFloat64, ok := v.(float64); ok {
u.FrontendRx = int64(vFloat64)
} else {
logrus.Error("unable to assert 'usage/ingress.rx")
}
} else {
logrus.Warn("missing 'usage/ingress.rx")
}
if v, found := usage["egress.tx"]; found {
if vFloat64, ok := v.(float64); ok {
u.BackendTx = int64(vFloat64)
} else {
logrus.Error("unable to assert 'usage/egress.tx'")
}
} else {
logrus.Warn("missing 'usage/egress.tx'")
}
if v, found := usage["egress.rx"]; found {
if vFloat64, ok := v.(float64); ok {
u.BackendRx = int64(vFloat64)
} else {
logrus.Error("unable to assert 'usage/egress.rx'")
}
} else {
logrus.Warn("missing 'usage/egress.rx'")
}
} else {
logrus.Errorf("unable to assert 'usage' (%v) %v", reflect.TypeOf(v), event)
}
} else {
logrus.Warnf("missing 'usage'")
}
if v, found := event["circuit_id"]; found {
if vStr, ok := v.(string); ok {
u.ZitiCircuitId = vStr
} else {
logrus.Error("unable to assert 'circuit_id'")
}
} else {
logrus.Warn("missing 'circuit_id'")
}
} else {
logrus.Errorf("not 'fabric.usage'")
}
return u
}


@ -1,10 +1,14 @@
package metrics
import (
"bytes"
"crypto/tls"
"crypto/x509"
"encoding/json"
"io"
"net/http"
"net/url"
"time"
"github.com/gorilla/websocket"
"github.com/michaelquigley/cf"
"github.com/openziti/channel/v2"
@ -14,19 +18,20 @@ import (
"github.com/openziti/fabric/pb/mgmt_pb"
"github.com/openziti/identity"
"github.com/openziti/sdk-golang/ziti/constants"
"github.com/openziti/zrok/controller/env"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"io"
"net/http"
"net/url"
"time"
)
func init() {
env.GetCfOptions().AddFlexibleSetter("websocketSource", loadWebsocketSourceConfig)
}
type WebsocketSourceConfig struct {
WebsocketEndpoint string
ApiEndpoint string
WebsocketEndpoint string // wss://127.0.0.1:1280/fabric/v1/ws-api
ApiEndpoint string // https://127.0.0.1:1280
Username string
Password string
Password string `cf:"+secret"`
}
func loadWebsocketSourceConfig(v interface{}, _ *cf.Options) (interface{}, error) {
@ -37,17 +42,17 @@ func loadWebsocketSourceConfig(v interface{}, _ *cf.Options) (interface{}, error
}
return &websocketSource{cfg: cfg}, nil
}
return nil, errors.New("invalid config structure for 'websocket' source")
return nil, errors.New("invalid config structure for 'websocketSource'")
}
type websocketSource struct {
cfg *WebsocketSourceConfig
ch channel.Channel
events chan map[string]interface{}
events chan ZitiEventMsg
join chan struct{}
}
func (s *websocketSource) Start(events chan map[string]interface{}) (chan struct{}, error) {
func (s *websocketSource) Start(events chan ZitiEventMsg) (join chan struct{}, err error) {
caCerts, err := rest_util.GetControllerWellKnownCas(s.cfg.ApiEndpoint)
if err != nil {
return nil, err
@ -146,17 +151,7 @@ func (s *websocketSource) Stop() {
}
func (s *websocketSource) HandleReceive(msg *channel.Message, _ channel.Channel) {
decoder := json.NewDecoder(bytes.NewReader(msg.Body))
for {
ev := make(map[string]interface{})
err := decoder.Decode(&ev)
if err == io.EOF {
break
}
if err == nil {
s.events <- ev
} else {
logrus.Errorf("error parsing '%v': %v", string(msg.Body), err)
}
s.events <- &ZitiEventJsonMsg{
data: ZitiEventJson(msg.Body),
}
}


@ -2,41 +2,63 @@ package controller
import (
"github.com/go-openapi/runtime/middleware"
"github.com/jmoiron/sqlx"
"github.com/openziti/zrok/controller/store"
"github.com/openziti/zrok/rest_model_zrok"
"github.com/openziti/zrok/rest_server_zrok/operations/metadata"
"github.com/sirupsen/logrus"
)
func overviewHandler(_ metadata.OverviewParams, principal *rest_model_zrok.Principal) middleware.Responder {
tx, err := str.Begin()
type overviewHandler struct{}
func newOverviewHandler() *overviewHandler {
return &overviewHandler{}
}
func (h *overviewHandler) Handle(_ metadata.OverviewParams, principal *rest_model_zrok.Principal) middleware.Responder {
trx, err := str.Begin()
if err != nil {
logrus.Errorf("error starting transaction: %v", err)
return metadata.NewOverviewInternalServerError()
}
defer func() { _ = tx.Rollback() }()
envs, err := str.FindEnvironmentsForAccount(int(principal.ID), tx)
defer func() { _ = trx.Rollback() }()
envs, err := str.FindEnvironmentsForAccount(int(principal.ID), trx)
if err != nil {
logrus.Errorf("error finding environments for '%v': %v", principal.Email, err)
return metadata.NewOverviewInternalServerError()
}
var out rest_model_zrok.EnvironmentSharesList
elm, err := newEnvironmentsLimitedMap(envs, trx)
if err != nil {
logrus.Errorf("error finding limited environments for '%v': %v", principal.Email, err)
return metadata.NewOverviewInternalServerError()
}
accountLimited, err := h.isAccountLimited(principal, trx)
if err != nil {
logrus.Errorf("error checking account limited for '%v': %v", principal.Email, err)
}
ovr := &rest_model_zrok.Overview{AccountLimited: accountLimited}
for _, env := range envs {
shrs, err := str.FindSharesForEnvironment(env.Id, tx)
envRes := &rest_model_zrok.EnvironmentAndResources{
Environment: &rest_model_zrok.Environment{
Address: env.Address,
Description: env.Description,
Host: env.Host,
ZID: env.ZId,
Limited: elm.isLimited(env),
CreatedAt: env.CreatedAt.UnixMilli(),
UpdatedAt: env.UpdatedAt.UnixMilli(),
},
}
shrs, err := str.FindSharesForEnvironment(env.Id, trx)
if err != nil {
logrus.Errorf("error finding shares for environment '%v': %v", env.ZId, err)
return metadata.NewOverviewInternalServerError()
}
es := &rest_model_zrok.EnvironmentShares{
Environment: &rest_model_zrok.Environment{
Address: env.Address,
CreatedAt: env.CreatedAt.UnixMilli(),
Description: env.Description,
Host: env.Host,
UpdatedAt: env.UpdatedAt.UnixMilli(),
ZID: env.ZId,
},
slm, err := newSharesLimitedMap(shrs, trx)
if err != nil {
logrus.Errorf("error finding limited shares for environment '%v': %v", env.ZId, err)
return metadata.NewOverviewInternalServerError()
}
for _, shr := range shrs {
feEndpoint := ""
if shr.FrontendEndpoint != nil {
@@ -50,7 +72,7 @@ func overviewHandler(_ metadata.OverviewParams, principal *rest_model_zrok.Princ
if shr.BackendProxyEndpoint != nil {
beProxyEndpoint = *shr.BackendProxyEndpoint
}
es.Shares = append(es.Shares, &rest_model_zrok.Share{
envShr := &rest_model_zrok.Share{
Token: shr.Token,
ZID: shr.ZId,
ShareMode: shr.ShareMode,
@@ -59,11 +81,104 @@ func overviewHandler(_ metadata.OverviewParams, principal *rest_model_zrok.Princ
FrontendEndpoint: feEndpoint,
BackendProxyEndpoint: beProxyEndpoint,
Reserved: shr.Reserved,
Limited: slm.isLimited(shr),
CreatedAt: shr.CreatedAt.UnixMilli(),
UpdatedAt: shr.UpdatedAt.UnixMilli(),
})
}
envRes.Shares = append(envRes.Shares, envShr)
}
out = append(out, es)
fes, err := str.FindFrontendsForEnvironment(env.Id, trx)
if err != nil {
logrus.Errorf("error finding frontends for environment '%v': %v", env.ZId, err)
return metadata.NewOverviewInternalServerError()
}
for _, fe := range fes {
envFe := &rest_model_zrok.Frontend{
ID: int64(fe.Id),
ZID: fe.ZId,
CreatedAt: fe.CreatedAt.UnixMilli(),
UpdatedAt: fe.UpdatedAt.UnixMilli(),
}
if fe.PrivateShareId != nil {
feShr, err := str.GetShare(*fe.PrivateShareId, trx)
if err != nil {
logrus.Errorf("error getting share for frontend '%v': %v", fe.ZId, err)
return metadata.NewOverviewInternalServerError()
}
envFe.ShrToken = feShr.Token
}
envRes.Frontends = append(envRes.Frontends, envFe)
}
ovr.Environments = append(ovr.Environments, envRes)
}
return metadata.NewOverviewOK().WithPayload(out)
return metadata.NewOverviewOK().WithPayload(ovr)
}
func (h *overviewHandler) isAccountLimited(principal *rest_model_zrok.Principal, trx *sqlx.Tx) (bool, error) {
var alj *store.AccountLimitJournal
aljEmpty, err := str.IsAccountLimitJournalEmpty(int(principal.ID), trx)
if err != nil {
return false, err
}
if !aljEmpty {
alj, err = str.FindLatestAccountLimitJournal(int(principal.ID), trx)
if err != nil {
return false, err
}
}
return alj != nil && alj.Action == store.LimitAction, nil
}
type sharesLimitedMap struct {
v map[int]struct{}
}
func newSharesLimitedMap(shrs []*store.Share, trx *sqlx.Tx) (*sharesLimitedMap, error) {
var shrIds []int
for i := range shrs {
shrIds = append(shrIds, shrs[i].Id)
}
shrsLimited, err := str.FindSelectedLatestShareLimitjournal(shrIds, trx)
if err != nil {
return nil, err
}
slm := &sharesLimitedMap{v: make(map[int]struct{})}
for i := range shrsLimited {
if shrsLimited[i].Action == store.LimitAction {
slm.v[shrsLimited[i].ShareId] = struct{}{}
}
}
return slm, nil
}
func (m *sharesLimitedMap) isLimited(shr *store.Share) bool {
_, limited := m.v[shr.Id]
return limited
}
type environmentsLimitedMap struct {
v map[int]struct{}
}
func newEnvironmentsLimitedMap(envs []*store.Environment, trx *sqlx.Tx) (*environmentsLimitedMap, error) {
var envIds []int
for i := range envs {
envIds = append(envIds, envs[i].Id)
}
envsLimited, err := str.FindSelectedLatestEnvironmentLimitJournal(envIds, trx)
if err != nil {
return nil, err
}
elm := &environmentsLimitedMap{v: make(map[int]struct{})}
for i := range envsLimited {
if envsLimited[i].Action == store.LimitAction {
elm.v[envsLimited[i].EnvironmentId] = struct{}{}
}
}
return elm, nil
}
func (m *environmentsLimitedMap) isLimited(env *store.Environment) bool {
_, limited := m.v[env.Id]
return limited
}
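The `sharesLimitedMap` and `environmentsLimitedMap` types above are plain integer sets: ids whose latest journal entry is a `LimitAction` go in, and `isLimited` is a membership test against `map[int]struct{}`. A minimal, self-contained sketch of the same pattern (names here are illustrative, not from the codebase):

```go
package main

import "fmt"

// limitedSet holds the ids whose latest limit-journal action was a limit;
// membership lookup is O(1) and the empty struct value costs no storage.
type limitedSet struct{ v map[int]struct{} }

func newLimitedSet(limitedIds []int) *limitedSet {
	s := &limitedSet{v: make(map[int]struct{})}
	for _, id := range limitedIds {
		s.v[id] = struct{}{}
	}
	return s
}

// isLimited reports whether the given id was marked limited.
func (s *limitedSet) isLimited(id int) bool {
	_, ok := s.v[id]
	return ok
}

func main() {
	s := newLimitedSet([]int{2, 5})
	fmt.Println(s.isLimited(2), s.isLimited(3)) // true false
}
```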


@@ -11,27 +11,25 @@ import (
"github.com/sirupsen/logrus"
)
type shareHandler struct {
cfg *LimitsConfig
}
type shareHandler struct{}
func newShareHandler(cfg *LimitsConfig) *shareHandler {
return &shareHandler{cfg: cfg}
func newShareHandler() *shareHandler {
return &shareHandler{}
}
func (h *shareHandler) Handle(params share.ShareParams, principal *rest_model_zrok.Principal) middleware.Responder {
logrus.Infof("handling")
logrus.Info("handling")
tx, err := str.Begin()
trx, err := str.Begin()
if err != nil {
logrus.Errorf("error starting transaction: %v", err)
return share.NewShareInternalServerError()
}
defer func() { _ = tx.Rollback() }()
defer func() { _ = trx.Rollback() }()
envZId := params.Body.EnvZID
envId := 0
envs, err := str.FindEnvironmentsForAccount(int(principal.ID), tx)
envs, err := str.FindEnvironmentsForAccount(int(principal.ID), trx)
if err == nil {
found := false
for _, env := range envs {
@@ -51,7 +49,7 @@ func (h *shareHandler) Handle(params share.ShareParams, principal *rest_model_zr
return share.NewShareInternalServerError()
}
if err := h.checkLimits(principal, envs, tx); err != nil {
if err := h.checkLimits(envId, principal, trx); err != nil {
logrus.Errorf("limits error: %v", err)
return share.NewShareUnauthorized()
}
@@ -79,7 +77,7 @@ func (h *shareHandler) Handle(params share.ShareParams, principal *rest_model_zr
var frontendZIds []string
var frontendTemplates []string
for _, frontendSelection := range params.Body.FrontendSelection {
sfe, err := str.FindFrontendPubliclyNamed(frontendSelection, tx)
sfe, err := str.FindFrontendPubliclyNamed(frontendSelection, trx)
if err != nil {
logrus.Error(err)
return share.NewShareNotFound()
@@ -97,6 +95,7 @@ func (h *shareHandler) Handle(params share.ShareParams, principal *rest_model_zr
}
case "private":
logrus.Info("doing private")
shrZId, frontendEndpoints, err = newPrivateResourceAllocator().allocate(envZId, shrToken, params, edge)
if err != nil {
logrus.Error(err)
@@ -119,19 +118,22 @@ func (h *shareHandler) Handle(params share.ShareParams, principal *rest_model_zr
BackendProxyEndpoint: &params.Body.BackendProxyEndpoint,
Reserved: reserved,
}
if len(params.Body.FrontendSelection) > 0 {
sshr.FrontendSelection = &params.Body.FrontendSelection[0]
}
if len(frontendEndpoints) > 0 {
sshr.FrontendEndpoint = &frontendEndpoints[0]
} else if sshr.ShareMode == "private" {
sshr.FrontendEndpoint = &sshr.ShareMode
}
sid, err := str.CreateShare(envId, sshr, tx)
sid, err := str.CreateShare(envId, sshr, trx)
if err != nil {
logrus.Errorf("error creating share record: %v", err)
return share.NewShareInternalServerError()
}
if err := tx.Commit(); err != nil {
if err := trx.Commit(); err != nil {
logrus.Errorf("error committing share record: %v", err)
return share.NewShareInternalServerError()
}
@@ -143,17 +145,15 @@ func (h *shareHandler) Handle(params share.ShareParams, principal *rest_model_zr
})
}
func (h *shareHandler) checkLimits(principal *rest_model_zrok.Principal, envs []*store.Environment, tx *sqlx.Tx) error {
if !principal.Limitless && h.cfg.Shares > Unlimited {
total := 0
for i := range envs {
shrs, err := str.FindSharesForEnvironment(envs[i].Id, tx)
func (h *shareHandler) checkLimits(envId int, principal *rest_model_zrok.Principal, trx *sqlx.Tx) error {
if !principal.Limitless {
if limitsAgent != nil {
ok, err := limitsAgent.CanCreateShare(int(principal.ID), envId, trx)
if err != nil {
return errors.Errorf("unable to find shares for environment '%v': %v", envs[i].ZId, err)
return errors.Wrapf(err, "error checking share limits for '%v'", principal.Email)
}
total += len(shrs)
if total+1 > h.cfg.Shares {
return errors.Errorf("would exceed shares limit of %d for '%v'", h.cfg.Shares, principal.Email)
if !ok {
return errors.Errorf("share limit check failed for '%v'", principal.Email)
}
}
}


@@ -42,13 +42,15 @@ func (h *shareDetailHandler) Handle(params metadata.GetShareDetailParams, princi
logrus.Errorf("environment not matched for share '%v' for account '%v'", params.ShrToken, principal.Email)
return metadata.NewGetShareDetailNotFound()
}
var sparkData map[string][]int64
if cfg.Influx != nil {
sparkData, err = sparkDataForShares([]*store.Share{shr})
logrus.Info(sparkData)
sparkRx := make(map[string][]int64)
sparkTx := make(map[string][]int64)
if cfg.Metrics != nil && cfg.Metrics.Influx != nil {
sparkRx, sparkTx, err = sparkDataForShares([]*store.Share{shr})
if err != nil {
logrus.Errorf("error querying spark data for share: %v", err)
}
} else {
logrus.Debug("skipping spark data; no influx configuration")
}
feEndpoint := ""
if shr.FrontendEndpoint != nil {
@@ -62,6 +64,10 @@ func (h *shareDetailHandler) Handle(params metadata.GetShareDetailParams, princi
if shr.BackendProxyEndpoint != nil {
beProxyEndpoint = *shr.BackendProxyEndpoint
}
var sparkData []*rest_model_zrok.SparkDataSample
for i := 0; i < len(sparkRx[shr.Token]) && i < len(sparkTx[shr.Token]); i++ {
sparkData = append(sparkData, &rest_model_zrok.SparkDataSample{Rx: float64(sparkRx[shr.Token][i]), Tx: float64(sparkTx[shr.Token][i])})
}
return metadata.NewGetShareDetailOK().WithPayload(&rest_model_zrok.Share{
Token: shr.Token,
ZID: shr.ZId,
@@ -71,7 +77,7 @@ func (h *shareDetailHandler) Handle(params metadata.GetShareDetailParams, princi
FrontendEndpoint: feEndpoint,
BackendProxyEndpoint: beProxyEndpoint,
Reserved: shr.Reserved,
Metrics: sparkData[shr.Token],
Activity: sparkData,
CreatedAt: shr.CreatedAt.UnixMilli(),
UpdatedAt: shr.UpdatedAt.UnixMilli(),
})


@@ -4,55 +4,114 @@ import (
"context"
"fmt"
"github.com/openziti/zrok/controller/store"
"github.com/sirupsen/logrus"
"strconv"
)
func sparkDataForShares(shrs []*store.Share) (map[string][]int64, error) {
out := make(map[string][]int64)
func sparkDataForEnvironments(envs []*store.Environment) (rx, tx map[int][]int64, err error) {
rx = make(map[int][]int64)
tx = make(map[int][]int64)
if len(envs) > 0 {
qapi := idb.QueryAPI(cfg.Metrics.Influx.Org)
if len(shrs) > 0 {
qapi := idb.QueryAPI(cfg.Influx.Org)
envFilter := "|> filter(fn: (r) =>"
for i, env := range envs {
if i > 0 {
envFilter += " or"
}
envFilter += fmt.Sprintf(" r[\"envId\"] == \"%d\"", env.Id)
}
envFilter += ")"
query := fmt.Sprintf("from(bucket: \"%v\")\n", cfg.Metrics.Influx.Bucket) +
"|> range(start: -5m)\n" +
"|> filter(fn: (r) => r[\"_measurement\"] == \"xfer\")\n" +
"|> filter(fn: (r) => r[\"_field\"] == \"rx\" or r[\"_field\"] == \"tx\")\n" +
"|> filter(fn: (r) => r[\"namespace\"] == \"backend\")\n" +
envFilter +
"|> drop(columns: [\"share\", \"acctId\"])\n" +
"|> aggregateWindow(every: 10s, fn: sum, createEmpty: true)\n"
result, err := qapi.Query(context.Background(), sparkFluxQuery(shrs))
result, err := qapi.Query(context.Background(), query)
if err != nil {
return nil, err
return nil, nil, err
}
for result.Next() {
combinedRate := int64(0)
readRate := result.Record().ValueByKey("bytesRead")
if readRate != nil {
combinedRate += readRate.(int64)
envIdS := result.Record().ValueByKey("envId").(string)
envId, err := strconv.ParseInt(envIdS, 10, 32)
if err != nil {
logrus.Errorf("error parsing '%v': %v", envIdS, err)
continue
}
writeRate := result.Record().ValueByKey("bytesWritten")
if writeRate != nil {
combinedRate += writeRate.(int64)
switch result.Record().Field() {
case "rx":
rxV := int64(0)
if v, ok := result.Record().Value().(int64); ok {
rxV = v
}
rxData := append(rx[int(envId)], rxV)
rx[int(envId)] = rxData
case "tx":
txV := int64(0)
if v, ok := result.Record().Value().(int64); ok {
txV = v
}
txData := append(tx[int(envId)], txV)
tx[int(envId)] = txData
}
shrToken := result.Record().ValueByKey("share").(string)
shrMetrics := out[shrToken]
shrMetrics = append(shrMetrics, combinedRate)
out[shrToken] = shrMetrics
}
}
return out, nil
return rx, tx, nil
}
func sparkFluxQuery(shrs []*store.Share) string {
shrFilter := "|> filter(fn: (r) =>"
for i, shr := range shrs {
if i > 0 {
shrFilter += " or"
func sparkDataForShares(shrs []*store.Share) (rx, tx map[string][]int64, err error) {
rx = make(map[string][]int64)
tx = make(map[string][]int64)
if len(shrs) > 0 {
qapi := idb.QueryAPI(cfg.Metrics.Influx.Org)
shrFilter := "|> filter(fn: (r) =>"
for i, shr := range shrs {
if i > 0 {
shrFilter += " or"
}
shrFilter += fmt.Sprintf(" r[\"share\"] == \"%v\"", shr.Token)
}
shrFilter += ")"
query := fmt.Sprintf("from(bucket: \"%v\")\n", cfg.Metrics.Influx.Bucket) +
"|> range(start: -5m)\n" +
"|> filter(fn: (r) => r[\"_measurement\"] == \"xfer\")\n" +
"|> filter(fn: (r) => r[\"_field\"] == \"rx\" or r[\"_field\"] == \"tx\")\n" +
"|> filter(fn: (r) => r[\"namespace\"] == \"backend\")\n" +
shrFilter +
"|> aggregateWindow(every: 10s, fn: sum, createEmpty: true)\n"
result, err := qapi.Query(context.Background(), query)
if err != nil {
return nil, nil, err
}
for result.Next() {
shrToken := result.Record().ValueByKey("share").(string)
switch result.Record().Field() {
case "rx":
rxV := int64(0)
if v, ok := result.Record().Value().(int64); ok {
rxV = v
}
rxData := append(rx[shrToken], rxV)
rx[shrToken] = rxData
case "tx":
txV := int64(0)
if v, ok := result.Record().Value().(int64); ok {
txV = v
}
txData := append(tx[shrToken], txV)
tx[shrToken] = txData
}
}
shrFilter += fmt.Sprintf(" r[\"share\"] == \"%v\"", shr.Token)
}
shrFilter += ")"
query := "read = from(bucket: \"zrok\")" +
"|> range(start: -5m)" +
"|> filter(fn: (r) => r[\"_measurement\"] == \"xfer\")" +
"|> filter(fn: (r) => r[\"_field\"] == \"bytesRead\" or r[\"_field\"] == \"bytesWritten\")" +
"|> filter(fn: (r) => r[\"namespace\"] == \"backend\")" +
shrFilter +
"|> aggregateWindow(every: 5s, fn: sum, createEmpty: true)\n" +
"|> pivot(rowKey:[\"_time\"], columnKey: [\"_field\"], valueColumn: \"_value\")" +
"|> yield(name: \"last\")"
return query
return rx, tx, nil
}
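Both spark-data functions assemble their Flux `filter` stage the same way: one equality clause per token (or id), joined with `or`. A small sketch of that filter construction, pulled out as a hypothetical helper for illustration (the handlers build the string inline):

```go
package main

import "fmt"

// buildShareFilter mirrors the loop above: it ORs one r["share"] equality
// clause per token into a single Flux filter stage. Illustrative only; note
// that tokens are server-generated, which is why plain interpolation is used.
func buildShareFilter(tokens []string) string {
	f := "|> filter(fn: (r) =>"
	for i, tok := range tokens {
		if i > 0 {
			f += " or"
		}
		f += fmt.Sprintf(" r[\"share\"] == \"%v\"", tok)
	}
	f += ")"
	return f
}

func main() {
	fmt.Println(buildShareFilter([]string{"abcd1234", "wxyz5678"}))
	// |> filter(fn: (r) => r["share"] == "abcd1234" or r["share"] == "wxyz5678")
}
```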


@@ -15,7 +15,7 @@ type Account struct {
Deleted bool
}
func (self *Store) CreateAccount(a *Account, tx *sqlx.Tx) (int, error) {
func (str *Store) CreateAccount(a *Account, tx *sqlx.Tx) (int, error) {
stmt, err := tx.Prepare("insert into accounts (email, salt, password, token, limitless) values ($1, $2, $3, $4, $5) returning id")
if err != nil {
return 0, errors.Wrap(err, "error preparing accounts insert statement")
@@ -27,7 +27,7 @@ func (self *Store) CreateAccount(a *Account, tx *sqlx.Tx) (int, error) {
return id, nil
}
func (self *Store) GetAccount(id int, tx *sqlx.Tx) (*Account, error) {
func (str *Store) GetAccount(id int, tx *sqlx.Tx) (*Account, error) {
a := &Account{}
if err := tx.QueryRowx("select * from accounts where id = $1", id).StructScan(a); err != nil {
return nil, errors.Wrap(err, "error selecting account by id")
@@ -35,7 +35,7 @@ func (self *Store) GetAccount(id int, tx *sqlx.Tx) (*Account, error) {
return a, nil
}
func (self *Store) FindAccountWithEmail(email string, tx *sqlx.Tx) (*Account, error) {
func (str *Store) FindAccountWithEmail(email string, tx *sqlx.Tx) (*Account, error) {
a := &Account{}
if err := tx.QueryRowx("select * from accounts where email = $1 and not deleted", email).StructScan(a); err != nil {
return nil, errors.Wrap(err, "error selecting account by email")
@@ -43,7 +43,7 @@ func (self *Store) FindAccountWithEmail(email string, tx *sqlx.Tx) (*Account, er
return a, nil
}
func (self *Store) FindAccountWithEmailAndDeleted(email string, tx *sqlx.Tx) (*Account, error) {
func (str *Store) FindAccountWithEmailAndDeleted(email string, tx *sqlx.Tx) (*Account, error) {
a := &Account{}
if err := tx.QueryRowx("select * from accounts where email = $1", email).StructScan(a); err != nil {
return nil, errors.Wrap(err, "error selecting account by email")
@@ -51,7 +51,7 @@ func (self *Store) FindAccountWithEmailAndDeleted(email string, tx *sqlx.Tx) (*A
return a, nil
}
func (self *Store) FindAccountWithToken(token string, tx *sqlx.Tx) (*Account, error) {
func (str *Store) FindAccountWithToken(token string, tx *sqlx.Tx) (*Account, error) {
a := &Account{}
if err := tx.QueryRowx("select * from accounts where token = $1 and not deleted", token).StructScan(a); err != nil {
return nil, errors.Wrap(err, "error selecting account by token")
@@ -59,7 +59,7 @@ func (self *Store) FindAccountWithToken(token string, tx *sqlx.Tx) (*Account, er
return a, nil
}
func (self *Store) UpdateAccount(a *Account, tx *sqlx.Tx) (int, error) {
func (str *Store) UpdateAccount(a *Account, tx *sqlx.Tx) (int, error) {
stmt, err := tx.Prepare("update accounts set email=$1, salt=$2, password=$3, token=$4, limitless=$5 where id = $6")
if err != nil {
return 0, errors.Wrap(err, "error preparing accounts update statement")


@@ -0,0 +1,65 @@
package store
import (
"github.com/jmoiron/sqlx"
"github.com/pkg/errors"
)
type AccountLimitJournal struct {
Model
AccountId int
RxBytes int64
TxBytes int64
Action LimitJournalAction
}
func (str *Store) CreateAccountLimitJournal(j *AccountLimitJournal, trx *sqlx.Tx) (int, error) {
stmt, err := trx.Prepare("insert into account_limit_journal (account_id, rx_bytes, tx_bytes, action) values ($1, $2, $3, $4) returning id")
if err != nil {
return 0, errors.Wrap(err, "error preparing account_limit_journal insert statement")
}
var id int
if err := stmt.QueryRow(j.AccountId, j.RxBytes, j.TxBytes, j.Action).Scan(&id); err != nil {
return 0, errors.Wrap(err, "error executing account_limit_journal insert statement")
}
return id, nil
}
func (str *Store) IsAccountLimitJournalEmpty(acctId int, trx *sqlx.Tx) (bool, error) {
count := 0
if err := trx.QueryRowx("select count(0) from account_limit_journal where account_id = $1", acctId).Scan(&count); err != nil {
return false, err
}
return count == 0, nil
}
func (str *Store) FindLatestAccountLimitJournal(acctId int, trx *sqlx.Tx) (*AccountLimitJournal, error) {
j := &AccountLimitJournal{}
if err := trx.QueryRowx("select * from account_limit_journal where account_id = $1 order by id desc limit 1", acctId).StructScan(j); err != nil {
return nil, errors.Wrap(err, "error finding account_limit_journal by account_id")
}
return j, nil
}
func (str *Store) FindAllLatestAccountLimitJournal(trx *sqlx.Tx) ([]*AccountLimitJournal, error) {
rows, err := trx.Queryx("select id, account_id, rx_bytes, tx_bytes, action, created_at, updated_at from account_limit_journal where id in (select max(id) as id from account_limit_journal group by account_id)")
if err != nil {
return nil, errors.Wrap(err, "error selecting all latest account_limit_journal")
}
var aljs []*AccountLimitJournal
for rows.Next() {
alj := &AccountLimitJournal{}
if err := rows.StructScan(alj); err != nil {
return nil, errors.Wrap(err, "error scanning account_limit_journal")
}
aljs = append(aljs, alj)
}
return aljs, nil
}
func (str *Store) DeleteAccountLimitJournalForAccount(acctId int, trx *sqlx.Tx) error {
if _, err := trx.Exec("delete from account_limit_journal where account_id = $1", acctId); err != nil {
return errors.Wrapf(err, "error deleting account_limit_journal for '#%d'", acctId)
}
return nil
}


@@ -0,0 +1,79 @@
package store
import (
"github.com/stretchr/testify/assert"
"testing"
)
func TestAccountLimitJournal(t *testing.T) {
str, err := Open(&Config{Path: ":memory:", Type: "sqlite3"})
assert.Nil(t, err)
assert.NotNil(t, str)
trx, err := str.Begin()
assert.Nil(t, err)
assert.NotNil(t, trx)
aljEmpty, err := str.IsAccountLimitJournalEmpty(1, trx)
assert.Nil(t, err)
assert.True(t, aljEmpty)
acctId, err := str.CreateAccount(&Account{Email: "nobody@nowhere.com", Salt: "salt", Password: "password", Token: "token", Limitless: false, Deleted: false}, trx)
assert.Nil(t, err)
_, err = str.CreateAccountLimitJournal(&AccountLimitJournal{AccountId: acctId, RxBytes: 1024, TxBytes: 2048, Action: WarningAction}, trx)
assert.Nil(t, err)
aljEmpty, err = str.IsAccountLimitJournalEmpty(acctId, trx)
assert.Nil(t, err)
assert.False(t, aljEmpty)
latestAlj, err := str.FindLatestAccountLimitJournal(acctId, trx)
assert.Nil(t, err)
assert.NotNil(t, latestAlj)
assert.Equal(t, int64(1024), latestAlj.RxBytes)
assert.Equal(t, int64(2048), latestAlj.TxBytes)
assert.Equal(t, WarningAction, latestAlj.Action)
_, err = str.CreateAccountLimitJournal(&AccountLimitJournal{AccountId: acctId, RxBytes: 2048, TxBytes: 4096, Action: LimitAction}, trx)
assert.Nil(t, err)
latestAlj, err = str.FindLatestAccountLimitJournal(acctId, trx)
assert.Nil(t, err)
assert.NotNil(t, latestAlj)
assert.Equal(t, int64(2048), latestAlj.RxBytes)
assert.Equal(t, int64(4096), latestAlj.TxBytes)
assert.Equal(t, LimitAction, latestAlj.Action)
}
func TestFindAllLatestAccountLimitJournal(t *testing.T) {
str, err := Open(&Config{Path: ":memory:", Type: "sqlite3"})
assert.Nil(t, err)
assert.NotNil(t, str)
trx, err := str.Begin()
assert.Nil(t, err)
assert.NotNil(t, trx)
acctId1, err := str.CreateAccount(&Account{Email: "nobody@nowhere.com", Salt: "salt1", Password: "password1", Token: "token1", Limitless: false, Deleted: false}, trx)
assert.Nil(t, err)
_, err = str.CreateAccountLimitJournal(&AccountLimitJournal{AccountId: acctId1, RxBytes: 2048, TxBytes: 4096, Action: WarningAction}, trx)
assert.Nil(t, err)
_, err = str.CreateAccountLimitJournal(&AccountLimitJournal{AccountId: acctId1, RxBytes: 2048, TxBytes: 4096, Action: ClearAction}, trx)
assert.Nil(t, err)
aljId13, err := str.CreateAccountLimitJournal(&AccountLimitJournal{AccountId: acctId1, RxBytes: 2048, TxBytes: 4096, Action: LimitAction}, trx)
assert.Nil(t, err)
acctId2, err := str.CreateAccount(&Account{Email: "someone@somewhere.com", Salt: "salt2", Password: "password2", Token: "token2", Limitless: false, Deleted: false}, trx)
assert.Nil(t, err)
aljId21, err := str.CreateAccountLimitJournal(&AccountLimitJournal{AccountId: acctId2, RxBytes: 2048, TxBytes: 4096, Action: WarningAction}, trx)
assert.Nil(t, err)
aljs, err := str.FindAllLatestAccountLimitJournal(trx)
assert.Nil(t, err)
assert.Equal(t, 2, len(aljs))
assert.Equal(t, aljId13, aljs[0].Id)
assert.Equal(t, aljId21, aljs[1].Id)
}


@@ -17,7 +17,7 @@ type AccountRequest struct {
Deleted bool
}
func (self *Store) CreateAccountRequest(ar *AccountRequest, tx *sqlx.Tx) (int, error) {
func (str *Store) CreateAccountRequest(ar *AccountRequest, tx *sqlx.Tx) (int, error) {
stmt, err := tx.Prepare("insert into account_requests (token, email, source_address) values ($1, $2, $3) returning id")
if err != nil {
return 0, errors.Wrap(err, "error preparing account_requests insert statement")
@@ -29,7 +29,7 @@ func (self *Store) CreateAccountRequest(ar *AccountRequest, tx *sqlx.Tx) (int, e
return id, nil
}
func (self *Store) GetAccountRequest(id int, tx *sqlx.Tx) (*AccountRequest, error) {
func (str *Store) GetAccountRequest(id int, tx *sqlx.Tx) (*AccountRequest, error) {
ar := &AccountRequest{}
if err := tx.QueryRowx("select * from account_requests where id = $1", id).StructScan(ar); err != nil {
return nil, errors.Wrap(err, "error selecting account_request by id")
@@ -37,7 +37,7 @@ func (self *Store) GetAccountRequest(id int, tx *sqlx.Tx) (*AccountRequest, erro
return ar, nil
}
func (self *Store) FindAccountRequestWithToken(token string, tx *sqlx.Tx) (*AccountRequest, error) {
func (str *Store) FindAccountRequestWithToken(token string, tx *sqlx.Tx) (*AccountRequest, error) {
ar := &AccountRequest{}
if err := tx.QueryRowx("select * from account_requests where token = $1 and not deleted", token).StructScan(ar); err != nil {
return nil, errors.Wrap(err, "error selecting account_request by token")
@@ -45,9 +45,9 @@ func (self *Store) FindAccountRequestWithToken(token string, tx *sqlx.Tx) (*Acco
return ar, nil
}
func (self *Store) FindExpiredAccountRequests(before time.Time, limit int, tx *sqlx.Tx) ([]*AccountRequest, error) {
func (str *Store) FindExpiredAccountRequests(before time.Time, limit int, tx *sqlx.Tx) ([]*AccountRequest, error) {
var sql string
switch self.cfg.Type {
switch str.cfg.Type {
case "postgres":
sql = "select * from account_requests where created_at < $1 and not deleted limit %d for update"
@@ -55,7 +55,7 @@ func (self *Store) FindExpiredAccountRequests(before time.Time, limit int, tx *s
sql = "select * from account_requests where created_at < $1 and not deleted limit %d"
default:
return nil, errors.Errorf("unknown database type '%v'", self.cfg.Type)
return nil, errors.Errorf("unknown database type '%v'", str.cfg.Type)
}
rows, err := tx.Queryx(fmt.Sprintf(sql, limit), before)
@@ -73,7 +73,7 @@ func (self *Store) FindExpiredAccountRequests(before time.Time, limit int, tx *s
return ars, nil
}
func (self *Store) FindAccountRequestWithEmail(email string, tx *sqlx.Tx) (*AccountRequest, error) {
func (str *Store) FindAccountRequestWithEmail(email string, tx *sqlx.Tx) (*AccountRequest, error) {
ar := &AccountRequest{}
if err := tx.QueryRowx("select * from account_requests where email = $1 and not deleted", email).StructScan(ar); err != nil {
return nil, errors.Wrap(err, "error selecting account_request by email")
@@ -81,7 +81,7 @@ func (self *Store) FindAccountRequestWithEmail(email string, tx *sqlx.Tx) (*Acco
return ar, nil
}
func (self *Store) DeleteAccountRequest(id int, tx *sqlx.Tx) error {
func (str *Store) DeleteAccountRequest(id int, tx *sqlx.Tx) error {
stmt, err := tx.Prepare("update account_requests set deleted = true, updated_at = current_timestamp where id = $1")
if err != nil {
return errors.Wrap(err, "error preparing account_requests delete statement")
@@ -93,7 +93,7 @@ func (self *Store) DeleteAccountRequest(id int, tx *sqlx.Tx) error {
return nil
}
func (self *Store) DeleteMultipleAccountRequests(ids []int, tx *sqlx.Tx) error {
func (str *Store) DeleteMultipleAccountRequests(ids []int, tx *sqlx.Tx) error {
if len(ids) == 0 {
return nil
}


@@ -15,7 +15,7 @@ type Environment struct {
Deleted bool
}
func (self *Store) CreateEnvironment(accountId int, i *Environment, tx *sqlx.Tx) (int, error) {
func (str *Store) CreateEnvironment(accountId int, i *Environment, tx *sqlx.Tx) (int, error) {
stmt, err := tx.Prepare("insert into environments (account_id, description, host, address, z_id) values ($1, $2, $3, $4, $5) returning id")
if err != nil {
return 0, errors.Wrap(err, "error preparing environments insert statement")
@@ -27,7 +27,7 @@ func (self *Store) CreateEnvironment(accountId int, i *Environment, tx *sqlx.Tx)
return id, nil
}
func (self *Store) CreateEphemeralEnvironment(i *Environment, tx *sqlx.Tx) (int, error) {
func (str *Store) CreateEphemeralEnvironment(i *Environment, tx *sqlx.Tx) (int, error) {
stmt, err := tx.Prepare("insert into environments (description, host, address, z_id) values ($1, $2, $3, $4) returning id")
if err != nil {
return 0, errors.Wrap(err, "error preparing environments (ephemeral) insert statement")
@@ -39,7 +39,7 @@ func (self *Store) CreateEphemeralEnvironment(i *Environment, tx *sqlx.Tx) (int,
return id, nil
}
func (self *Store) GetEnvironment(id int, tx *sqlx.Tx) (*Environment, error) {
func (str *Store) GetEnvironment(id int, tx *sqlx.Tx) (*Environment, error) {
i := &Environment{}
if err := tx.QueryRowx("select * from environments where id = $1", id).StructScan(i); err != nil {
return nil, errors.Wrap(err, "error selecting environment by id")
@@ -47,7 +47,7 @@ func (self *Store) GetEnvironment(id int, tx *sqlx.Tx) (*Environment, error) {
return i, nil
}
func (self *Store) FindEnvironmentsForAccount(accountId int, tx *sqlx.Tx) ([]*Environment, error) {
func (str *Store) FindEnvironmentsForAccount(accountId int, tx *sqlx.Tx) ([]*Environment, error) {
rows, err := tx.Queryx("select environments.* from environments where account_id = $1 and not deleted", accountId)
if err != nil {
return nil, errors.Wrap(err, "error selecting environments by account id")
@@ -63,7 +63,7 @@ func (self *Store) FindEnvironmentsForAccount(accountId int, tx *sqlx.Tx) ([]*En
return is, nil
}
func (self *Store) FindEnvironmentForAccount(envZId string, accountId int, tx *sqlx.Tx) (*Environment, error) {
func (str *Store) FindEnvironmentForAccount(envZId string, accountId int, tx *sqlx.Tx) (*Environment, error) {
env := &Environment{}
if err := tx.QueryRowx("select environments.* from environments where z_id = $1 and account_id = $2 and not deleted", envZId, accountId).StructScan(env); err != nil {
return nil, errors.Wrap(err, "error finding environment by z_id and account_id")
@@ -71,7 +71,7 @@ func (self *Store) FindEnvironmentForAccount(envZId string, accountId int, tx *s
return env, nil
}
func (self *Store) DeleteEnvironment(id int, tx *sqlx.Tx) error {
func (str *Store) DeleteEnvironment(id int, tx *sqlx.Tx) error {
stmt, err := tx.Prepare("update environments set updated_at = current_timestamp, deleted = true where id = $1")
if err != nil {
return errors.Wrap(err, "error preparing environments delete statement")


@@ -0,0 +1,93 @@
package store
import (
"fmt"
"github.com/jmoiron/sqlx"
"github.com/pkg/errors"
)
type EnvironmentLimitJournal struct {
Model
EnvironmentId int
RxBytes int64
TxBytes int64
Action LimitJournalAction
}
func (str *Store) CreateEnvironmentLimitJournal(j *EnvironmentLimitJournal, trx *sqlx.Tx) (int, error) {
stmt, err := trx.Prepare("insert into environment_limit_journal (environment_id, rx_bytes, tx_bytes, action) values ($1, $2, $3, $4) returning id")
if err != nil {
return 0, errors.Wrap(err, "error preparing environment_limit_journal insert statement")
}
var id int
if err := stmt.QueryRow(j.EnvironmentId, j.RxBytes, j.TxBytes, j.Action).Scan(&id); err != nil {
return 0, errors.Wrap(err, "error executing environment_limit_journal insert statement")
}
return id, nil
}
func (str *Store) IsEnvironmentLimitJournalEmpty(envId int, trx *sqlx.Tx) (bool, error) {
count := 0
if err := trx.QueryRowx("select count(0) from environment_limit_journal where environment_id = $1", envId).Scan(&count); err != nil {
return false, err
}
return count == 0, nil
}
func (str *Store) FindLatestEnvironmentLimitJournal(envId int, trx *sqlx.Tx) (*EnvironmentLimitJournal, error) {
j := &EnvironmentLimitJournal{}
if err := trx.QueryRowx("select * from environment_limit_journal where environment_id = $1 order by created_at desc limit 1", envId).StructScan(j); err != nil {
return nil, errors.Wrap(err, "error finding environment_limit_journal by environment_id")
}
return j, nil
}
func (str *Store) FindSelectedLatestEnvironmentLimitJournal(envIds []int, trx *sqlx.Tx) ([]*EnvironmentLimitJournal, error) {
if len(envIds) < 1 {
return nil, nil
}
in := "("
for i := range envIds {
if i > 0 {
in += ", "
}
in += fmt.Sprintf("%d", envIds[i])
}
in += ")"
rows, err := trx.Queryx("select id, environment_id, rx_bytes, tx_bytes, action, created_at, updated_at from environment_limit_journal where id in (select max(id) as id from environment_limit_journal group by environment_id) and environment_id in " + in)
if err != nil {
return nil, errors.Wrap(err, "error selecting all latest environment_limit_journal")
}
var eljs []*EnvironmentLimitJournal
for rows.Next() {
elj := &EnvironmentLimitJournal{}
if err := rows.StructScan(elj); err != nil {
return nil, errors.Wrap(err, "error scanning environment_limit_journal")
}
eljs = append(eljs, elj)
}
return eljs, nil
}
func (str *Store) FindAllLatestEnvironmentLimitJournal(trx *sqlx.Tx) ([]*EnvironmentLimitJournal, error) {
rows, err := trx.Queryx("select id, environment_id, rx_bytes, tx_bytes, action, created_at, updated_at from environment_limit_journal where id in (select max(id) as id from environment_limit_journal group by environment_id)")
if err != nil {
return nil, errors.Wrap(err, "error selecting all latest environment_limit_journal")
}
var eljs []*EnvironmentLimitJournal
for rows.Next() {
elj := &EnvironmentLimitJournal{}
if err := rows.StructScan(elj); err != nil {
return nil, errors.Wrap(err, "error scanning environment_limit_journal")
}
eljs = append(eljs, elj)
}
return eljs, nil
}
func (str *Store) DeleteEnvironmentLimitJournalForEnvironment(envId int, trx *sqlx.Tx) error {
if _, err := trx.Exec("delete from environment_limit_journal where environment_id = $1", envId); err != nil {
return errors.Wrapf(err, "error deleting environment_limit_journal for '#%d'", envId)
}
return nil
}
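The `FindSelectedLatestEnvironmentLimitJournal` query above builds its `in (...)` clause by formatting the integer IDs directly into the SQL text, which is safe for `int` values. An alternative that keeps the query fully parameterized is to generate numbered placeholders and a matching args slice, roughly like this (the `buildInClause` helper is hypothetical, not part of the store):

```go
package main

import (
	"fmt"
	"strings"
)

// buildInClause returns a placeholder list like "($1, $2, $3)" plus the
// matching args slice, so the IDs travel as bound parameters instead of
// being concatenated into the SQL string.
func buildInClause(ids []int, startIdx int) (string, []interface{}) {
	ph := make([]string, len(ids))
	args := make([]interface{}, len(ids))
	for i, id := range ids {
		ph[i] = fmt.Sprintf("$%d", startIdx+i)
		args[i] = id
	}
	return "(" + strings.Join(ph, ", ") + ")", args
}

func main() {
	clause, args := buildInClause([]int{7, 8, 9}, 1)
	fmt.Println(clause) // ($1, $2, $3)
	fmt.Println(len(args))
}
```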

View File

@ -7,22 +7,23 @@ import (
type Frontend struct {
Model
EnvironmentId *int
Token string
ZId string
PublicName *string
UrlTemplate *string
Reserved bool
Deleted bool
EnvironmentId *int
PrivateShareId *int
Token string
ZId string
PublicName *string
UrlTemplate *string
Reserved bool
Deleted bool
}
func (str *Store) CreateFrontend(envId int, f *Frontend, tx *sqlx.Tx) (int, error) {
stmt, err := tx.Prepare("insert into frontends (environment_id, token, z_id, public_name, url_template, reserved) values ($1, $2, $3, $4, $5, $6) returning id")
stmt, err := tx.Prepare("insert into frontends (environment_id, private_share_id, token, z_id, public_name, url_template, reserved) values ($1, $2, $3, $4, $5, $6, $7) returning id")
if err != nil {
return 0, errors.Wrap(err, "error preparing frontends insert statement")
}
var id int
if err := stmt.QueryRow(envId, f.Token, f.ZId, f.PublicName, f.UrlTemplate, f.Reserved).Scan(&id); err != nil {
if err := stmt.QueryRow(envId, f.PrivateShareId, f.Token, f.ZId, f.PublicName, f.UrlTemplate, f.Reserved).Scan(&id); err != nil {
return 0, errors.Wrap(err, "error executing frontends insert statement")
}
return id, nil
@ -104,13 +105,29 @@ func (str *Store) FindPublicFrontends(tx *sqlx.Tx) ([]*Frontend, error) {
return frontends, nil
}
func (str *Store) FindFrontendsForPrivateShare(shrId int, tx *sqlx.Tx) ([]*Frontend, error) {
rows, err := tx.Queryx("select frontends.* from frontends where private_share_id = $1 and not deleted", shrId)
if err != nil {
return nil, errors.Wrap(err, "error selecting frontends by private_share_id")
}
var is []*Frontend
for rows.Next() {
i := &Frontend{}
if err := rows.StructScan(i); err != nil {
return nil, errors.Wrap(err, "error scanning frontend")
}
is = append(is, i)
}
return is, nil
}
func (str *Store) UpdateFrontend(fe *Frontend, tx *sqlx.Tx) error {
sql := "update frontends set environment_id = $1, token = $2, z_id = $3, public_name = $4, url_template = $5, reserved = $6, updated_at = current_timestamp where id = $7"
sql := "update frontends set environment_id = $1, private_share_id = $2, token = $3, z_id = $4, public_name = $5, url_template = $6, reserved = $7, updated_at = current_timestamp where id = $8"
stmt, err := tx.Prepare(sql)
if err != nil {
return errors.Wrap(err, "error preparing frontends update statement")
}
_, err = stmt.Exec(fe.EnvironmentId, fe.Token, fe.ZId, fe.PublicName, fe.UrlTemplate, fe.Reserved, fe.Id)
_, err = stmt.Exec(fe.EnvironmentId, fe.PrivateShareId, fe.Token, fe.ZId, fe.PublicName, fe.UrlTemplate, fe.Reserved, fe.Id)
if err != nil {
return errors.Wrap(err, "error executing frontends update statement")
}

View File

@ -0,0 +1,9 @@
package store
type LimitJournalAction string
const (
LimitAction LimitJournalAction = "limit"
WarningAction LimitJournalAction = "warning"
ClearAction LimitJournalAction = "clear"
)

View File

@ -16,7 +16,7 @@ type PasswordResetRequest struct {
Deleted bool
}
func (self *Store) CreatePasswordResetRequest(prr *PasswordResetRequest, tx *sqlx.Tx) (int, error) {
func (str *Store) CreatePasswordResetRequest(prr *PasswordResetRequest, tx *sqlx.Tx) (int, error) {
stmt, err := tx.Prepare("insert into password_reset_requests (account_id, token) values ($1, $2) ON CONFLICT(account_id) DO UPDATE SET token=$2 returning id")
if err != nil {
return 0, errors.Wrap(err, "error preparing password_reset_requests insert statement")
@ -28,7 +28,7 @@ func (self *Store) CreatePasswordResetRequest(prr *PasswordResetRequest, tx *sql
return id, nil
}
func (self *Store) FindPasswordResetRequestWithToken(token string, tx *sqlx.Tx) (*PasswordResetRequest, error) {
func (str *Store) FindPasswordResetRequestWithToken(token string, tx *sqlx.Tx) (*PasswordResetRequest, error) {
prr := &PasswordResetRequest{}
if err := tx.QueryRowx("select * from password_reset_requests where token = $1 and not deleted", token).StructScan(prr); err != nil {
return nil, errors.Wrap(err, "error selecting password_reset_requests by token")
@ -36,16 +36,16 @@ func (self *Store) FindPasswordResetRequestWithToken(token string, tx *sqlx.Tx)
return prr, nil
}
func (self *Store) FindExpiredPasswordResetRequests(before time.Time, limit int, tx *sqlx.Tx) ([]*PasswordResetRequest, error) {
func (str *Store) FindExpiredPasswordResetRequests(before time.Time, limit int, tx *sqlx.Tx) ([]*PasswordResetRequest, error) {
var sql string
switch self.cfg.Type {
switch str.cfg.Type {
case "postgres":
sql = "select * from password_reset_requests where created_at < $1 and not deleted limit %d for update"
case "sqlite3":
sql = "select * from password_reset_requests where created_at < $1 and not deleted limit %d"
default:
return nil, errors.Errorf("unknown database type '%v'", self.cfg.Type)
return nil, errors.Errorf("unknown database type '%v'", str.cfg.Type)
}
rows, err := tx.Queryx(fmt.Sprintf(sql, limit), before)
@ -63,7 +63,7 @@ func (self *Store) FindExpiredPasswordResetRequests(before time.Time, limit int,
return prrs, nil
}
func (self *Store) DeletePasswordResetRequest(id int, tx *sqlx.Tx) error {
func (str *Store) DeletePasswordResetRequest(id int, tx *sqlx.Tx) error {
stmt, err := tx.Prepare("update password_reset_requests set updated_at = current_timestamp, deleted = true where id = $1")
if err != nil {
return errors.Wrap(err, "error preparing password_reset_requests delete statement")
@ -75,7 +75,7 @@ func (self *Store) DeletePasswordResetRequest(id int, tx *sqlx.Tx) error {
return nil
}
func (self *Store) DeleteMultiplePasswordResetRequests(ids []int, tx *sqlx.Tx) error {
func (str *Store) DeleteMultiplePasswordResetRequests(ids []int, tx *sqlx.Tx) error {
if len(ids) == 0 {
return nil
}

View File

@ -19,7 +19,7 @@ type Share struct {
Deleted bool
}
func (self *Store) CreateShare(envId int, shr *Share, tx *sqlx.Tx) (int, error) {
func (str *Store) CreateShare(envId int, shr *Share, tx *sqlx.Tx) (int, error) {
stmt, err := tx.Prepare("insert into shares (environment_id, z_id, token, share_mode, backend_mode, frontend_selection, frontend_endpoint, backend_proxy_endpoint, reserved) values ($1, $2, $3, $4, $5, $6, $7, $8, $9) returning id")
if err != nil {
return 0, errors.Wrap(err, "error preparing shares insert statement")
@ -31,7 +31,7 @@ func (self *Store) CreateShare(envId int, shr *Share, tx *sqlx.Tx) (int, error)
return id, nil
}
func (self *Store) GetShare(id int, tx *sqlx.Tx) (*Share, error) {
func (str *Store) GetShare(id int, tx *sqlx.Tx) (*Share, error) {
shr := &Share{}
if err := tx.QueryRowx("select * from shares where id = $1", id).StructScan(shr); err != nil {
return nil, errors.Wrap(err, "error selecting share by id")
@ -39,7 +39,7 @@ func (self *Store) GetShare(id int, tx *sqlx.Tx) (*Share, error) {
return shr, nil
}
func (self *Store) FindAllShares(tx *sqlx.Tx) ([]*Share, error) {
func (str *Store) FindAllShares(tx *sqlx.Tx) ([]*Share, error) {
rows, err := tx.Queryx("select * from shares where not deleted order by id")
if err != nil {
return nil, errors.Wrap(err, "error selecting all shares")
@ -55,7 +55,7 @@ func (self *Store) FindAllShares(tx *sqlx.Tx) ([]*Share, error) {
return shrs, nil
}
func (self *Store) FindShareWithToken(shrToken string, tx *sqlx.Tx) (*Share, error) {
func (str *Store) FindShareWithToken(shrToken string, tx *sqlx.Tx) (*Share, error) {
shr := &Share{}
if err := tx.QueryRowx("select * from shares where token = $1 and not deleted", shrToken).StructScan(shr); err != nil {
return nil, errors.Wrap(err, "error selecting share by token")
@ -63,7 +63,7 @@ func (self *Store) FindShareWithToken(shrToken string, tx *sqlx.Tx) (*Share, err
return shr, nil
}
func (self *Store) FindShareWithZIdAndDeleted(zId string, tx *sqlx.Tx) (*Share, error) {
func (str *Store) FindShareWithZIdAndDeleted(zId string, tx *sqlx.Tx) (*Share, error) {
shr := &Share{}
if err := tx.QueryRowx("select * from shares where z_id = $1", zId).StructScan(shr); err != nil {
return nil, errors.Wrap(err, "error selecting share by z_id")
@ -71,7 +71,7 @@ func (self *Store) FindShareWithZIdAndDeleted(zId string, tx *sqlx.Tx) (*Share,
return shr, nil
}
func (self *Store) FindSharesForEnvironment(envId int, tx *sqlx.Tx) ([]*Share, error) {
func (str *Store) FindSharesForEnvironment(envId int, tx *sqlx.Tx) ([]*Share, error) {
rows, err := tx.Queryx("select shares.* from shares where environment_id = $1 and not deleted", envId)
if err != nil {
return nil, errors.Wrap(err, "error selecting shares by environment id")
@ -87,7 +87,7 @@ func (self *Store) FindSharesForEnvironment(envId int, tx *sqlx.Tx) ([]*Share, e
return shrs, nil
}
func (self *Store) UpdateShare(shr *Share, tx *sqlx.Tx) error {
func (str *Store) UpdateShare(shr *Share, tx *sqlx.Tx) error {
sql := "update shares set z_id = $1, token = $2, share_mode = $3, backend_mode = $4, frontend_selection = $5, frontend_endpoint = $6, backend_proxy_endpoint = $7, reserved = $8, updated_at = current_timestamp where id = $9"
stmt, err := tx.Prepare(sql)
if err != nil {
@ -100,7 +100,7 @@ func (self *Store) UpdateShare(shr *Share, tx *sqlx.Tx) error {
return nil
}
func (self *Store) DeleteShare(id int, tx *sqlx.Tx) error {
func (str *Store) DeleteShare(id int, tx *sqlx.Tx) error {
stmt, err := tx.Prepare("update shares set updated_at = current_timestamp, deleted = true where id = $1")
if err != nil {
return errors.Wrap(err, "error preparing shares delete statement")

View File

@ -0,0 +1,93 @@
package store
import (
"fmt"
"github.com/jmoiron/sqlx"
"github.com/pkg/errors"
)
type ShareLimitJournal struct {
Model
ShareId int
RxBytes int64
TxBytes int64
Action LimitJournalAction
}
func (str *Store) CreateShareLimitJournal(j *ShareLimitJournal, trx *sqlx.Tx) (int, error) {
stmt, err := trx.Prepare("insert into share_limit_journal (share_id, rx_bytes, tx_bytes, action) values ($1, $2, $3, $4) returning id")
if err != nil {
return 0, errors.Wrap(err, "error preparing share_limit_journal insert statement")
}
var id int
if err := stmt.QueryRow(j.ShareId, j.RxBytes, j.TxBytes, j.Action).Scan(&id); err != nil {
return 0, errors.Wrap(err, "error executing share_limit_journal insert statement")
}
return id, nil
}
func (str *Store) IsShareLimitJournalEmpty(shrId int, trx *sqlx.Tx) (bool, error) {
count := 0
if err := trx.QueryRowx("select count(0) from share_limit_journal where share_id = $1", shrId).Scan(&count); err != nil {
return false, err
}
return count == 0, nil
}
func (str *Store) FindLatestShareLimitJournal(shrId int, trx *sqlx.Tx) (*ShareLimitJournal, error) {
j := &ShareLimitJournal{}
if err := trx.QueryRowx("select * from share_limit_journal where share_id = $1 order by created_at desc limit 1", shrId).StructScan(j); err != nil {
return nil, errors.Wrap(err, "error finding share_limit_journal by share_id")
}
return j, nil
}
func (str *Store) FindSelectedLatestShareLimitJournal(shrIds []int, trx *sqlx.Tx) ([]*ShareLimitJournal, error) {
if len(shrIds) < 1 {
return nil, nil
}
in := "("
for i := range shrIds {
if i > 0 {
in += ", "
}
in += fmt.Sprintf("%d", shrIds[i])
}
in += ")"
rows, err := trx.Queryx("select id, share_id, rx_bytes, tx_bytes, action, created_at, updated_at from share_limit_journal where id in (select max(id) as id from share_limit_journal group by share_id) and share_id in " + in)
if err != nil {
return nil, errors.Wrap(err, "error selecting all latest share_limit_journal")
}
var sljs []*ShareLimitJournal
for rows.Next() {
slj := &ShareLimitJournal{}
if err := rows.StructScan(slj); err != nil {
return nil, errors.Wrap(err, "error scanning share_limit_journal")
}
sljs = append(sljs, slj)
}
return sljs, nil
}
func (str *Store) FindAllLatestShareLimitJournal(trx *sqlx.Tx) ([]*ShareLimitJournal, error) {
rows, err := trx.Queryx("select id, share_id, rx_bytes, tx_bytes, action, created_at, updated_at from share_limit_journal where id in (select max(id) as id from share_limit_journal group by share_id)")
if err != nil {
return nil, errors.Wrap(err, "error selecting all latest share_limit_journal")
}
var sljs []*ShareLimitJournal
for rows.Next() {
slj := &ShareLimitJournal{}
if err := rows.StructScan(slj); err != nil {
return nil, errors.Wrap(err, "error scanning share_limit_journal")
}
sljs = append(sljs, slj)
}
return sljs, nil
}
func (str *Store) DeleteShareLimitJournalForShare(shrId int, trx *sqlx.Tx) error {
if _, err := trx.Exec("delete from share_limit_journal where share_id = $1", shrId); err != nil {
return errors.Wrapf(err, "error deleting share_limit_journal for '#%d'", shrId)
}
return nil
}

View File

@ -0,0 +1,33 @@
-- +migrate Up
create type limit_action_type as enum ('clear', 'warning', 'limit');
create table account_limit_journal (
id serial primary key,
account_id integer references accounts(id),
rx_bytes bigint not null,
tx_bytes bigint not null,
action limit_action_type not null,
created_at timestamptz not null default(current_timestamp),
updated_at timestamptz not null default(current_timestamp)
);
create table environment_limit_journal (
id serial primary key,
environment_id integer references environments(id),
rx_bytes bigint not null,
tx_bytes bigint not null,
action limit_action_type not null,
created_at timestamptz not null default(current_timestamp),
updated_at timestamptz not null default(current_timestamp)
);
create table share_limit_journal (
id serial primary key,
share_id integer references shares(id),
rx_bytes bigint not null,
tx_bytes bigint not null,
action limit_action_type not null,
created_at timestamptz not null default(current_timestamp),
updated_at timestamptz not null default(current_timestamp)
);

View File

@ -0,0 +1,31 @@
-- +migrate Up
alter table frontends rename to frontends_old;
alter sequence frontends_id_seq rename to frontends_id_seq_old;
create table frontends (
id serial primary key,
environment_id integer references environments(id),
private_share_id integer references shares(id),
token varchar(32) not null unique,
z_id varchar(32) not null,
url_template varchar(1024),
public_name varchar(64) unique,
reserved boolean not null default(false),
created_at timestamptz not null default(current_timestamp),
updated_at timestamptz not null default(current_timestamp),
deleted boolean not null default(false)
);
insert into frontends (id, environment_id, token, z_id, url_template, public_name, reserved, created_at, updated_at, deleted)
select id, environment_id, token, z_id, url_template, public_name, reserved, created_at, updated_at, deleted from frontends_old;
select setval('frontends_id_seq', (select max(id) from frontends));
drop table frontends_old;
alter index frontends_pkey1 rename to frontends_pkey;
alter index frontends_public_name_key1 rename to frontends_public_name_key;
alter index frontends_token_key1 rename to frontends_token_key;
alter table frontends rename constraint frontends_environment_id_fkey1 to frontends_environment_id_fkey;

View File

@ -0,0 +1,4 @@
-- +migrate Up
alter type backend_mode rename value 'dav' to 'tcpTunnel';
alter type backend_mode add value 'udpTunnel';

View File

@ -0,0 +1,31 @@
-- +migrate Up
create table account_limit_journal (
id integer primary key,
account_id integer references accounts(id),
rx_bytes bigint not null,
tx_bytes bigint not null,
action limit_action_type not null,
created_at datetime not null default(strftime('%Y-%m-%d %H:%M:%f', 'now')),
updated_at datetime not null default(strftime('%Y-%m-%d %H:%M:%f', 'now'))
);
create table environment_limit_journal (
id integer primary key,
environment_id integer references environments(id),
rx_bytes bigint not null,
tx_bytes bigint not null,
action limit_action_type not null,
created_at datetime not null default(strftime('%Y-%m-%d %H:%M:%f', 'now')),
updated_at datetime not null default(strftime('%Y-%m-%d %H:%M:%f', 'now'))
);
create table share_limit_journal (
id integer primary key,
share_id integer references shares(id),
rx_bytes bigint not null,
tx_bytes bigint not null,
action limit_action_type not null,
created_at datetime not null default(strftime('%Y-%m-%d %H:%M:%f', 'now')),
updated_at datetime not null default(strftime('%Y-%m-%d %H:%M:%f', 'now'))
);

View File

@ -0,0 +1,3 @@
-- +migrate Up
alter table frontends add column private_share_id integer references shares(id);

View File

@ -0,0 +1,54 @@
-- +migrate Up
alter table shares rename to shares_old;
create table shares (
id integer primary key,
environment_id integer constraint fk_environments_shares references environments on delete cascade,
z_id string not null unique,
token string not null unique,
share_mode string not null,
backend_mode string not null,
frontend_selection string,
frontend_endpoint string,
backend_proxy_endpoint string,
reserved boolean not null default(false),
created_at datetime not null default(strftime('%Y-%m-%d %H:%M:%f', 'now')),
updated_at datetime not null default(strftime('%Y-%m-%d %H:%M:%f', 'now')), deleted boolean not null default(false),
constraint chk_z_id check (z_id <> ''),
constraint chk_token check (token <> ''),
constraint chk_share_mode check (share_mode == 'public' or share_mode == 'private'),
constraint chk_backend_mode check (backend_mode == 'proxy' or backend_mode == 'web' or backend_mode == 'tcpTunnel' or backend_mode == 'udpTunnel')
);
insert into shares select * from shares_old;
drop table shares_old;
alter table frontends rename to frontends_old;
create table frontends (
id integer primary key,
environment_id integer references environments(id),
token varchar(32) not null unique,
z_id varchar(32) not null,
public_name varchar(64) unique,
url_template varchar(1024),
reserved boolean not null default(false),
created_at datetime not null default(strftime('%Y-%m-%d %H:%M:%f', 'now')),
updated_at datetime not null default(strftime('%Y-%m-%d %H:%M:%f', 'now')),
deleted boolean not null default(false),
private_share_id integer references shares(id)
);
insert into frontends select * from frontends_old;
drop table frontends_old;
alter table share_limit_journal rename to share_limit_journal_old;
create table share_limit_journal (
id integer primary key,
share_id integer references shares(id),
rx_bytes bigint not null,
tx_bytes bigint not null,
action limit_action_type not null,
created_at datetime not null default(strftime('%Y-%m-%d %H:%M:%f', 'now')),
updated_at datetime not null default(strftime('%Y-%m-%d %H:%M:%f', 'now'))
);
insert into share_limit_journal select * from share_limit_journal_old;
drop table share_limit_journal_old;

View File

@ -62,15 +62,15 @@ func Open(cfg *Config) (*Store, error) {
return store, nil
}
func (self *Store) Begin() (*sqlx.Tx, error) {
return self.db.Beginx()
func (str *Store) Begin() (*sqlx.Tx, error) {
return str.db.Beginx()
}
func (self *Store) Close() error {
return self.db.Close()
func (str *Store) Close() error {
return str.db.Close()
}
func (self *Store) migrate(cfg *Config) error {
func (str *Store) migrate(cfg *Config) error {
switch cfg.Type {
case "sqlite3":
migrations := &migrate.EmbedFileSystemMigrationSource{
@ -78,7 +78,7 @@ func (self *Store) migrate(cfg *Config) error {
Root: "/",
}
migrate.SetTable("migrations")
n, err := migrate.Exec(self.db.DB, "sqlite3", migrations, migrate.Up)
n, err := migrate.Exec(str.db.DB, "sqlite3", migrations, migrate.Up)
if err != nil {
return errors.Wrap(err, "error running migrations")
}
@ -90,7 +90,7 @@ func (self *Store) migrate(cfg *Config) error {
Root: "/",
}
migrate.SetTable("migrations")
n, err := migrate.Exec(self.db.DB, "postgres", migrations, migrate.Up)
n, err := migrate.Exec(str.db.DB, "postgres", migrations, migrate.Up)
if err != nil {
return errors.Wrap(err, "error running migrations")
}

View File

@ -68,7 +68,7 @@ func (h *unaccessHandler) Handle(params share.UnaccessParams, principal *rest_mo
return share.NewUnaccessNotFound()
}
if err := zrokEdgeSdk.DeleteServicePolicy(envZId, fmt.Sprintf("tags.zrokShareToken=\"%v\" and tags.zrokFrontendToken=\"%v\" and type=1", shrToken, feToken), edge); err != nil {
if err := zrokEdgeSdk.DeleteServicePolicies(envZId, fmt.Sprintf("tags.zrokShareToken=\"%v\" and tags.zrokFrontendToken=\"%v\" and type=1", shrToken, feToken), edge); err != nil {
logrus.Errorf("error removing access to '%v' for '%v': %v", shrToken, envZId, err)
return share.NewUnaccessInternalServerError()
}

View File

@ -124,10 +124,10 @@ func (h *unshareHandler) deallocateResources(senv *store.Environment, shrToken,
if err := zrokEdgeSdk.DeleteServiceEdgeRouterPolicy(senv.ZId, shrToken, edge); err != nil {
return err
}
if err := zrokEdgeSdk.DeleteServicePolicyDial(senv.ZId, shrToken, edge); err != nil {
if err := zrokEdgeSdk.DeleteServicePoliciesDial(senv.ZId, shrToken, edge); err != nil {
return err
}
if err := zrokEdgeSdk.DeleteServicePolicyBind(senv.ZId, shrToken, edge); err != nil {
if err := zrokEdgeSdk.DeleteServicePoliciesBind(senv.ZId, shrToken, edge); err != nil {
return err
}
if err := zrokEdgeSdk.DeleteConfig(senv.ZId, shrToken, edge); err != nil {

View File

@ -3,6 +3,7 @@ package controller
import (
errors2 "github.com/go-openapi/errors"
"github.com/jaevor/go-nanoid"
"github.com/openziti/zrok/controller/config"
"github.com/openziti/zrok/rest_model_zrok"
"github.com/sirupsen/logrus"
"net/http"
@ -10,10 +11,10 @@ import (
)
type zrokAuthenticator struct {
cfg *Config
cfg *config.Config
}
func newZrokAuthenticator(cfg *Config) *zrokAuthenticator {
func newZrokAuthenticator(cfg *config.Config) *zrokAuthenticator {
return &zrokAuthenticator{cfg}
}

View File

@ -6,13 +6,13 @@ import (
"github.com/openziti/edge/rest_util"
)
type ZitiConfig struct {
type Config struct {
ApiEndpoint string
Username string
Password string `cf:"+secret"`
}
func Client(cfg *ZitiConfig) (*rest_management_api_client.ZitiEdgeManagement, error) {
func Client(cfg *Config) (*rest_management_api_client.ZitiEdgeManagement, error) {
caCerts, err := rest_util.GetControllerWellKnownCas(cfg.ApiEndpoint)
if err != nil {
return nil, err

View File

@ -78,16 +78,16 @@ func createServicePolicy(name string, semantic rest_model.Semantic, identityRole
return resp.Payload.Data.ID, nil
}
func DeleteServicePolicyBind(envZId, shrToken string, edge *rest_management_api_client.ZitiEdgeManagement) error {
return DeleteServicePolicy(envZId, fmt.Sprintf("tags.zrokShareToken=\"%v\" and type=%d", shrToken, servicePolicyBind), edge)
func DeleteServicePoliciesBind(envZId, shrToken string, edge *rest_management_api_client.ZitiEdgeManagement) error {
return DeleteServicePolicies(envZId, fmt.Sprintf("tags.zrokShareToken=\"%v\" and type=%d", shrToken, servicePolicyBind), edge)
}
func DeleteServicePolicyDial(envZId, shrToken string, edge *rest_management_api_client.ZitiEdgeManagement) error {
return DeleteServicePolicy(envZId, fmt.Sprintf("tags.zrokShareToken=\"%v\" and type=%d", shrToken, servicePolicyDial), edge)
func DeleteServicePoliciesDial(envZId, shrToken string, edge *rest_management_api_client.ZitiEdgeManagement) error {
return DeleteServicePolicies(envZId, fmt.Sprintf("tags.zrokShareToken=\"%v\" and type=%d", shrToken, servicePolicyDial), edge)
}
func DeleteServicePolicy(envZId, filter string, edge *rest_management_api_client.ZitiEdgeManagement) error {
limit := int64(1)
func DeleteServicePolicies(envZId, filter string, edge *rest_management_api_client.ZitiEdgeManagement) error {
limit := int64(0)
offset := int64(0)
listReq := &service_policy.ListServicePoliciesParams{
Filter: &filter,
@ -100,8 +100,9 @@ func DeleteServicePolicy(envZId, filter string, edge *rest_management_api_client
if err != nil {
return err
}
if len(listResp.Payload.Data) == 1 {
spId := *(listResp.Payload.Data[0].ID)
logrus.Infof("found %d service policies to delete for '%v'", len(listResp.Payload.Data), filter)
for i := range listResp.Payload.Data {
spId := *(listResp.Payload.Data[i].ID)
req := &service_policy.DeleteServicePolicyParams{
ID: spId,
Context: context.Background(),
@ -112,8 +113,9 @@ func DeleteServicePolicy(envZId, filter string, edge *rest_management_api_client
return err
}
logrus.Infof("deleted service policy '%v' for environment '%v'", spId, envZId)
} else {
logrus.Infof("did not find a service policy")
}
if len(listResp.Payload.Data) < 1 {
logrus.Warnf("did not find any service policies to delete for '%v'", filter)
}
return nil
}

View File

@ -1,5 +1,5 @@
# this builds docker.io/openziti/zrok
FROM registry.access.redhat.com/ubi8/ubi-minimal
FROM docker.io/openziti/ziti-cli:0.27.9
# This build stage grabs artifacts that are copied into the final image.
# It uses the same base as the final image to maximize docker cache hits.
@ -20,7 +20,7 @@ LABEL name="openziti/zrok" \
USER root
### add licenses to this directory
RUN mkdir -m0755 /licenses
RUN mkdir -p -m0755 /licenses
COPY ./LICENSE /licenses/apache.txt
RUN mkdir -p /usr/local/bin

View File

@ -5,7 +5,7 @@ sidebar_position: 200
## Self-Hosted
`zrok` is not limited to a managed offering. You can [host your own](../guides/self-hosting/v0.3_self_hosting_guide.md) instance of `zrok` as well. `zrok` is
`zrok` is not limited to a managed offering. You can [host your own](../guides/self-hosting/self_hosting_guide.md) instance of `zrok` as well. `zrok` is
also freely available as open source software hosted by GitHub under a very permissive Apache v2 license.
## Managed Service

View File

@ -430,7 +430,7 @@ You use the `zrok reserve` command to create _reserved shares_. Reserved shares
## Self-Hosting a Service Instance
Interested in self-hosting your own `zrok` service instance? See the [self-hosting guide](./guides/self-hosting/v0.3_self_hosting_guide.md) for details.
Interested in self-hosting your own `zrok` service instance? See the [self-hosting guide](./guides/self-hosting/self_hosting_guide.md) for details.
[openziti]: https://docs.openziti.io/docs/learn/introduction/ "OpenZiti"
[zrok-download]: https://zrok.io "Zrok Download"

View File

@ -0,0 +1,7 @@
{
"label": "Metrics and Limits",
"position": 40,
"link": {
"type": "generated-index"
}
}

View File

@ -0,0 +1,84 @@
# Configuring Limits
> If you have not yet configured [metrics](configuring-metrics.md), please visit the [metrics guide](configuring-metrics.md) first before working through the limits configuration.
The limits facility in `zrok` is responsible for controlling the number of resources in use (environments, shares) and also for ensuring that any single account, environment, or share is held below the configured thresholds.
Take this `zrok` controller configuration stanza as an example:
```yaml
limits:
enforcing: true
cycle: 1m
environments: -1
shares: -1
bandwidth:
per_account:
period: 5m
warning:
rx: -1
tx: -1
total: 7242880
limit:
rx: -1
tx: -1
total: 10485760
per_environment:
period: 5m
warning:
rx: -1
tx: -1
total: -1
limit:
rx: -1
tx: -1
total: -1
per_share:
period: 5m
warning:
rx: -1
tx: -1
total: -1
limit:
rx: -1
tx: -1
total: -1
```
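As a rough illustration of how the `-1` ("unlimited") sentinel and the `warning`/`limit` thresholds interact, consider this sketch (the `exceeds` helper is hypothetical, not the actual `zrok` implementation):

```go
package main

import "fmt"

// exceeds reports whether the observed usage crosses a configured
// threshold, where -1 means "unlimited" (never exceeded).
func exceeds(used, threshold int64) bool {
	if threshold < 0 {
		return false
	}
	return used > threshold
}

func main() {
	// Using the per_account totals from the example configuration above:
	// warning total = 7242880, limit total = 10485760.
	rx, tx := int64(4_000_000), int64(4_000_000)
	total := rx + tx
	fmt.Println(exceeds(total, 7242880))  // true: warning threshold crossed
	fmt.Println(exceeds(total, 10485760)) // false: limit threshold not crossed
	fmt.Println(exceeds(total, -1))       // false: -1 disables the check
}
```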
## The Global Controls
The `enforcing` boolean will globally enable or disable limits for the controller.
The `cycle` value controls how frequently the limits system will look for limited resources to re-enable.
## Resource Limits
The `environments` and `shares` values control the number of environments and shares that are allowed per-account. Any limit value can be set to `-1`, which means _unlimited_.
## Bandwidth Limits
The `bandwidth` section is designed to provide a configurable system for controlling the amount of data transfer that can be performed by users of the `zrok` service instance. The bandwidth limits are configurable for each share, environment, and account.
`per_account`, `per_environment`, and `per_share` are all configured the same way:
The `period` specifies the time window for the bandwidth limit. See the documentation for [`time.ParseDuration`](https://pkg.go.dev/time#ParseDuration) for details about the format used for these durations. If the `period` is set to 5 minutes, then the limits implementation will monitor the send and receive traffic for the resource (share, environment, or account) for the last 5 minutes, and if the amount of data is greater than either the `warning` or the `limit` threshold, action will be taken.
The `rx` value is the number of bytes _received_ by the resource. The `tx` value is the number of bytes _transmitted_ by the resource. And `total` is the combined `rx`+`tx` value.
If the traffic quantity is greater than the `warning` threshold, the user will receive an email notification letting them know that their data transfer size is rising and will eventually be limited (the email details the limit threshold).
If the traffic quantity is greater than the `limit` threshold, the resources will be limited until the traffic in the window (the last 5 minutes in our example) falls back below the `limit` threshold.
### Limit Actions
When a resource is limited, the actions taken differ depending on what kind of resource is being limited.
When a share is limited, the dial service policies for that share are removed. No other action is taken. This means that public frontends will simply return a `404` as if the share is no longer there. Private frontends will also return `404` errors. When the limit is relaxed, the dial policies are put back in place and the share will continue operating normally.
When an environment is limited, all of the shares in that environment become limited, and the user is not able to create new shares in that environment. When the limit is relaxed, all of the share limits are relaxed and the user is again able to add shares to the environment.
When an account is limited, all of the environments in that account become limited (limiting all of the shares), and the user is not able to create new environments or shares. When the limit is relaxed, all of the environments and shares will return to normal operation.
## Unlimited Accounts
The `accounts` table in the database includes a `limitless` column. When this column is set to `true`, the account is not subject to any of the limits in the system.

View File

@ -0,0 +1,117 @@
# Configuring Metrics
A fully configured, production-scale `zrok` service instance looks like this:
![zrok Metrics Architecture](images/metrics-architecture.png)
`zrok` metrics builds on top of the `fabric.usage` event type from OpenZiti. The OpenZiti controller has a number of ways to emit events. The `zrok` controller has several ways to consume `fabric.usage` events. Smaller installations could be configured in these ways:
![zrok simplified metrics architecture](images/metrics-architecture-simple.png)
Environments that horizontally scale the `zrok` control plane with multiple controllers should use an AMQP-based queue to "fan out" the metrics workload across the entire control plane. Simpler installations that use a single `zrok` controller can collect `fabric.usage` events from the OpenZiti controller by "tailing" the events log file, or collecting them from the OpenZiti controller's websocket implementation.
## Configuring the OpenZiti Controller
> This requires a version of OpenZiti with a `fabric` dependency of `v0.22.52` or newer, which is satisfied by the `v0.27.6` release of the OpenZiti Controller.
Emitting `fabric.usage` events to a file is currently the most reliable mechanism to capture usage events into `zrok`. We're going to configure the OpenZiti controller to append `fabric.usage` events to a file, by adding this stanza to the OpenZiti controller configuration:
```yaml
events:
jsonLogger:
subscriptions:
- type: fabric.usage
version: 3
handler:
type: file
format: json
path: /tmp/fabric-usage.json
```
You'll want to adjust the `events/jsonLogger/handler/path` to wherever you would like to send these events for ingestion into `zrok`. There are additional OpenZiti options that control file rotation. Be sure to consult the OpenZiti docs to tune these settings to be appropriate for your environment.
By default, the OpenZiti events infrastructure reports and batches events in 1 minute buckets. A 1 minute interval is too coarse to provide a snappy `zrok` metrics experience, so let's increase the reporting frequency to every 5 seconds. Add this to the `network` stanza of your OpenZiti controller's configuration:
```yaml
network:
intervalAgeThreshold: 5s
metricsReportInterval: 5s
```
And you'll want to add this stanza to the tail-end of the router configuration for every router on your OpenZiti network:
```yaml
# this must be the last router configuration stanza
metrics:
reportInterval: 5s
intervalAgeThreshold: 5s
```
Be sure to restart all of the components of your OpenZiti network after making these configuration changes.
## Configuring the zrok Metrics Bridge
`zrok` currently uses a "metrics bridge" component (running as a separate process) to consume the `fabric.usage` events from the OpenZiti controller, and publish them onto an AMQP queue. Add a stanza like the following to your `zrok` controller configuration:
```yaml
bridge:
source:
type: fileSource
path: /tmp/fabric-usage.json
sink:
type: amqpSink
url: amqp://guest:guest@localhost:5672
queue_name: events
```
This configuration consumes the `fabric.usage` events from the file we previously specified in our OpenZiti controller configuration, and publishes them onto an AMQP queue.
### RabbitMQ
For this example, we're going to use RabbitMQ as our AMQP implementation. The stock, default RabbitMQ configuration, launched as a `docker` container, will work just fine:
```bash
$ docker run -it --rm --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3.11-management
```
Once RabbitMQ is running, you can start the `zrok` metrics bridge by pointing it at your `zrok` controller configuration, like this:
```bash
$ zrok ctrl metrics bridge <path/to/zrok-controller.yaml>
```
## Configuring zrok Metrics
Configure the `metrics` section of your `zrok` controller. Here is an example:
```yaml
metrics:
agent:
source:
type: amqpSource
url: amqp://guest:guest@localhost:5672
queue_name: events
influx:
url: "http://127.0.0.1:8086"
bucket: zrok # the bucket and org must be
org: zrok # created in advance in InfluxDB
token: "<secret token>"
```
This configures the `zrok` controller to consume usage events from the AMQP queue, and configures the InfluxDB metrics store. The InfluxDB organization and bucket must be created in advance. The `zrok` controller will not create these for you.
## Testing Metrics
With all of the components configured and running, either use `zrok test loop` or manually create share(s) to generate traffic on the `zrok` instance. If everything is working correctly, you should see log messages from the controller like the following, which indicate that the controller is processing OpenZiti usage events and generating `zrok` metrics:
```
[5339.658] INFO zrok/controller/metrics.(*influxWriter).Handle: share: 736z80mr4syu, circuit: Ad1V-6y48 backend {rx: 4.5 kB, tx: 4.6 kB} frontend {rx: 4.6 kB, tx: 4.5 kB}
[5349.652] INFO zrok/controller/metrics.(*influxWriter).Handle: share: 736z80mr4syu, circuit: Ad1V-6y48 backend {rx: 2.5 kB, tx: 2.6 kB} frontend {rx: 2.6 kB, tx: 2.5 kB}
[5354.657] INFO zrok/controller/metrics.(*influxWriter).Handle: share: 5a4u7lqxb7pa, circuit: iG1--6H4S backend {rx: 13.2 kB, tx: 13.3 kB} frontend {rx: 13.3 kB, tx: 13.2 kB}
```
The `zrok` web console should also be showing activity for your share(s) like the following:
![zrok web console activity](images/zrok-console-activity.png)
With metrics configured, you might be interested in [configuring limits](configuring-limits.md).


@ -0,0 +1,70 @@
<mxfile host="Electron" modified="2023-04-04T16:56:44.671Z" agent="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/21.1.2 Chrome/106.0.5249.199 Electron/21.4.3 Safari/537.36" etag="hNOxKmEJVuYIWfjZN-Q2" version="21.1.2" type="device">
<diagram name="Page-1" id="IMoEC3u-7S6gkD3jGaqt">
<mxGraphModel dx="1030" dy="801" grid="1" gridSize="10" guides="1" tooltips="1" connect="1" arrows="1" fold="1" page="1" pageScale="1" pageWidth="600" pageHeight="400" math="0" shadow="0">
<root>
<mxCell id="0" />
<mxCell id="1" parent="0" />
<mxCell id="z8BNBxY42kQ6VSPeSeC1-1" value="Ziti&lt;br&gt;Controller" style="ellipse;shape=cloud;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="40" y="50" width="120" height="80" as="geometry" />
</mxCell>
<mxCell id="z8BNBxY42kQ6VSPeSeC1-2" value="events.json" style="shape=document;whiteSpace=wrap;html=1;boundedLbl=1;" vertex="1" parent="1">
<mxGeometry x="190" y="65" width="80" height="50" as="geometry" />
</mxCell>
<mxCell id="z8BNBxY42kQ6VSPeSeC1-3" value="" style="endArrow=classic;html=1;rounded=0;exitX=0.875;exitY=0.5;exitDx=0;exitDy=0;exitPerimeter=0;entryX=0;entryY=0.5;entryDx=0;entryDy=0;" edge="1" parent="1" source="z8BNBxY42kQ6VSPeSeC1-1" target="z8BNBxY42kQ6VSPeSeC1-2">
<mxGeometry width="50" height="50" relative="1" as="geometry">
<mxPoint x="280" y="280" as="sourcePoint" />
<mxPoint x="330" y="230" as="targetPoint" />
</mxGeometry>
</mxCell>
<mxCell id="z8BNBxY42kQ6VSPeSeC1-15" value="zrok&lt;br&gt;Metrics Store&lt;br&gt;&lt;font style=&quot;font-size: 9px;&quot;&gt;(InfluxDB)&lt;/font&gt;" style="shape=cylinder3;whiteSpace=wrap;html=1;boundedLbl=1;backgroundOutline=1;size=15;" vertex="1" parent="1">
<mxGeometry x="471" y="40" width="90" height="100" as="geometry" />
</mxCell>
<mxCell id="z8BNBxY42kQ6VSPeSeC1-17" value="" style="endArrow=classic;startArrow=classic;html=1;rounded=0;entryX=1;entryY=0.5;entryDx=0;entryDy=0;exitX=0;exitY=0.5;exitDx=0;exitDy=0;exitPerimeter=0;" edge="1" parent="1" source="z8BNBxY42kQ6VSPeSeC1-15" target="z8BNBxY42kQ6VSPeSeC1-11">
<mxGeometry width="50" height="50" relative="1" as="geometry">
<mxPoint x="501" y="284" as="sourcePoint" />
<mxPoint x="551" y="234" as="targetPoint" />
</mxGeometry>
</mxCell>
<mxCell id="z8BNBxY42kQ6VSPeSeC1-18" value="" style="endArrow=classic;html=1;rounded=0;exitX=1;exitY=0.5;exitDx=0;exitDy=0;" edge="1" parent="1" source="z8BNBxY42kQ6VSPeSeC1-2" target="z8BNBxY42kQ6VSPeSeC1-11">
<mxGeometry width="50" height="50" relative="1" as="geometry">
<mxPoint x="190" y="230" as="sourcePoint" />
<mxPoint x="240" y="180" as="targetPoint" />
</mxGeometry>
</mxCell>
<mxCell id="z8BNBxY42kQ6VSPeSeC1-19" value="Ziti&lt;br&gt;Controller" style="ellipse;shape=cloud;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="98" y="270" width="120" height="80" as="geometry" />
</mxCell>
<mxCell id="z8BNBxY42kQ6VSPeSeC1-24" value="zrok&lt;br&gt;Metrics Store&lt;br&gt;&lt;font style=&quot;font-size: 9px;&quot;&gt;(InfluxDB)&lt;/font&gt;" style="shape=cylinder3;whiteSpace=wrap;html=1;boundedLbl=1;backgroundOutline=1;size=15;" vertex="1" parent="1">
<mxGeometry x="413" y="260" width="90" height="100" as="geometry" />
</mxCell>
<mxCell id="z8BNBxY42kQ6VSPeSeC1-25" value="" style="endArrow=classic;startArrow=classic;html=1;rounded=0;entryX=1;entryY=0.5;entryDx=0;entryDy=0;exitX=0;exitY=0.5;exitDx=0;exitDy=0;exitPerimeter=0;" edge="1" parent="1" source="z8BNBxY42kQ6VSPeSeC1-24" target="z8BNBxY42kQ6VSPeSeC1-23">
<mxGeometry width="50" height="50" relative="1" as="geometry">
<mxPoint x="443" y="504" as="sourcePoint" />
<mxPoint x="493" y="454" as="targetPoint" />
</mxGeometry>
</mxCell>
<mxCell id="z8BNBxY42kQ6VSPeSeC1-23" value="zrok&lt;br&gt;Controller" style="rounded=1;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="252" y="280" width="120" height="60" as="geometry" />
</mxCell>
<mxCell id="z8BNBxY42kQ6VSPeSeC1-29" value="" style="endArrow=classic;html=1;rounded=0;exitX=0.875;exitY=0.5;exitDx=0;exitDy=0;exitPerimeter=0;entryX=0;entryY=0.5;entryDx=0;entryDy=0;" edge="1" parent="1" source="z8BNBxY42kQ6VSPeSeC1-19" target="z8BNBxY42kQ6VSPeSeC1-23">
<mxGeometry width="50" height="50" relative="1" as="geometry">
<mxPoint x="198" y="462" as="sourcePoint" />
<mxPoint x="248" y="412" as="targetPoint" />
</mxGeometry>
</mxCell>
<mxCell id="z8BNBxY42kQ6VSPeSeC1-30" value="" style="endArrow=none;dashed=1;html=1;dashPattern=1 3;strokeWidth=2;rounded=0;" edge="1" parent="1">
<mxGeometry width="50" height="50" relative="1" as="geometry">
<mxPoint x="220" y="310" as="sourcePoint" />
<mxPoint x="250" y="230" as="targetPoint" />
</mxGeometry>
</mxCell>
<mxCell id="z8BNBxY42kQ6VSPeSeC1-31" value="Events over Websocket" style="text;html=1;strokeColor=none;fillColor=none;align=center;verticalAlign=middle;whiteSpace=wrap;rounded=0;fontSize=9;" vertex="1" parent="1">
<mxGeometry x="200" y="210" width="100" height="20" as="geometry" />
</mxCell>
<mxCell id="z8BNBxY42kQ6VSPeSeC1-11" value="zrok&lt;br&gt;Controller" style="rounded=1;whiteSpace=wrap;html=1;" vertex="1" parent="1">
<mxGeometry x="310" y="60" width="120" height="60" as="geometry" />
</mxCell>
</root>
</mxGraphModel>
</diagram>
</mxfile>

Binary file not shown.


Some files were not shown because too many files have changed in this diff.