Supports running on multiple clouds #471

Closed
18 of 26 tasks
JervyShi opened this issue Apr 13, 2022 · 7 comments

JervyShi (Member) commented Apr 13, 2022

2022.6.30 Update:
Roadmap


Layotto was built with a vision of giving applications the ability to "write once, run on any cloud".

Layotto already provides a number of APIs, the components that back those APIs, and quick-start documentation that helps users get up and running locally. But running locally is not enough: an application must be deployed to the cloud to provide production-level service, so Layotto needs to be able to run on several cloud platforms.

The initial expectation is that Layotto can run directly on Aliyun and AWS, and can enable the components that integrate with the corresponding cloud services based on the given configuration.

Deployment capabilities can be built in phases:

  • Install and run directly on ECS instances
  • Launch on Kubernetes (with a sidecar deployment option under consideration)

seeflood (Member) commented Apr 16, 2022

Background:

Sky computing:
https://www.jianshu.com/p/f6ea78bef4d3
https://sigops.org/s/conferences/hotos/2021/papers/hotos21-s02-stoica.pdf

Breaking the requirement down:


message GetStateRequest {
  // Required. The name of state store.
  string store_name = 1;

  // Required. The key of the desired state
  string key = 2;

  // (optional) read consistency mode
  StateOptions.StateConsistency consistency = 3;

  // (optional) The metadata which will be sent to state store components.
  map<string, string> metadata = 4;
}

Like Dapr, a configured store should have both a name and a type:
https://docs.dapr.io/reference/components-reference/supported-state-stores/setup-redis/
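For illustration, here is a minimal Go sketch (hypothetical, not Layotto's actual configuration structs) of a store configuration that carries both a logical name and a component type, so the same application config can bind the same name to different backends on different clouds:

// Hypothetical sketch, not Layotto's real config schema: each configured
// state store carries a logical name (what the app passes as store_name in
// GetStateRequest) plus a component type, so "order-store" can resolve to
// Redis on one cloud and to a different backend on another.
package config

type StateStoreConfig struct {
	Name     string            `json:"name"`     // logical name referenced by the application
	Type     string            `json:"type"`     // concrete component, e.g. "redis" or "aws.dynamodb"
	Metadata map[string]string `json:"metadata"` // component-specific settings (address, credentials, ...)
}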

  • Build a "showroom" demo
    (pick an existing system and deploy it to multiple clouds)
    - skywalking demo server
      It depends on a DB and already supports multiple DBs natively, so it does not need Layotto.

    • apollo demo server?
      A quick look shows it only depends on a DB, which may not be convincing enough.
    • istio bookinfo demo?
      Feels feasible; needs some research into which features to add to Bookinfo.
  • Reduce intrusiveness

yanggeorge commented:

Looking forward to it.

seeflood (Member) commented Apr 21, 2022

Cross-cloud deployment "showroom" design

Interaction design

Modify Istio's Bookinfo demo project to add features that call object storage and pub/sub.

Architecture diagram: (image)

Katacoda tutorial

Adapt MOSN's tutorial; see https://katacoda.com/mosn/courses/istio
The repository is at https://github.com/mosn/mosn-tutorial

Phase 0. Service mesh traffic governance

  • case 1: service mesh traffic governance
    Run the Istio traffic-governance examples with Layotto.

Phase 1. Cross-cloud portability and cross-component traffic governance

  • case 2: switching components across clouds
    The app's components can be swapped freely, e.g. object storage starts on AWS S3 and is later switched to Alibaba Cloud OSS; pub/sub starts on RocketMQ and is later switched to an AWS MQ component.

  • case 3: cross-cloud routing
    Route dynamically by rules (e.g. the admin user accesses AWS S3 while the user Zhangsan accesses Alibaba Cloud OSS).

This requires developing a composite component that supports configurable rules and routes by header or metadata (a sketch follows at the end of this phase).

  • case 4: cross-cloud disaster recovery
    Demonstrate failover: normally AWS S3 is used, but the demo environment injects a fault every 5 minutes, and during the fault traffic automatically switches to Alibaba Cloud OSS.

This requires traffic governance for upstreams, which is relatively hard to build.

In theory every component could be "disguised" as an Istio service, so that Istio could govern the traffic of all Layotto APIs, but that is probably difficult.

  • case 5: cross-cloud merge
    For example, an image-list page shows images from two OSS services at once, merging the data stored in both clouds' object storage; or merging MQ messages from two clouds.

This also requires developing a composite component.
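A rough Go sketch of the composite-component idea behind cases 3 and 4 (the Store interface and helper below are hypothetical, not Layotto's real component API): pick a backend by a metadata-driven rule and fall back to the other cloud's store on error.

package composite

import (
	"context"
	"fmt"
)

// Store is a hypothetical, minimal view of a state-store component;
// Layotto's real component interface is richer than this.
type Store interface {
	Get(ctx context.Context, key string, metadata map[string]string) ([]byte, error)
}

// RoutingStore wraps two concrete stores (e.g. AWS S3 and Alibaba Cloud OSS),
// routes each request by a configurable rule (case 3), and falls back to the
// other store when the chosen one fails (case 4).
type RoutingStore struct {
	Primary, Secondary Store
	// Rule returns true when the request should go to Primary.
	Rule func(metadata map[string]string) bool
}

func (r *RoutingStore) Get(ctx context.Context, key string, md map[string]string) ([]byte, error) {
	first, second := r.Primary, r.Secondary
	if r.Rule != nil && !r.Rule(md) {
		first, second = second, first
	}
	data, err := first.Get(ctx, key, md)
	if err == nil {
		return data, nil
	}
	// Disaster-recovery fallback: try the other cloud's store.
	if data, err2 := second.Get(ctx, key, md); err2 == nil {
		return data, nil
	}
	return nil, fmt.Errorf("both stores failed, primary error: %w", err)
}

// Example rule: route the "admin" user to the primary store (say AWS S3)
// and everyone else to the secondary store (say Alibaba Cloud OSS).
func userRule(md map[string]string) bool {
	return md["user"] == "admin"
}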

Phase 2. Sky computing: moving the computation

Phase 3. Sky computing: moving the data

Demo site

Deploy the Bookinfo project on one or two clouds to demonstrate effects such as cross-cloud disaster recovery.
Personally I don't think this is really necessary.

seeflood (Member) commented Apr 23, 2022

Interaction design v0.2

Still add features to Istio's Bookinfo, such as calling object storage:

Users still deploy Bookinfo in the Katacoda lab.

But the architecture changes: we add a preview service that is deployed in advance on two clouds.
After the user starts the Bookinfo cluster in the lab environment, the product page calls the preview service over the public internet via HTTP, and the traffic can be governed through Istio: for example, if a fault is injected into the AWS service, traffic automatically fails over to the Alibaba Cloud service; or the user Zhangsan is routed to Alibaba Cloud while admin is routed to AWS.

Pros:
It saves effort: it demonstrates cross-cloud deployment, and the traffic governance uses existing features, so nothing new has to be developed.

Cons:
This scenario does not require traffic governance for components, and without component-level traffic governance it is hard to achieve the "call whichever cloud is cheaper" effect.

Traffic governance for components

So there is a second question: should we build "traffic governance for components" at all?
Personally I think it is necessary, because not every component implements traffic-governance features (e.g. retry policies such as exponential backoff, circuit breaking, fault injection, or disaster-recovery failover), so it is better to provide a generic capability at the runtime layer. Dapr is also working on this, though its current features are still simple; see https://docs.dapr.io/operations/resiliency/
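As an illustration of what such a generic runtime-level capability could look like, here is a minimal Go sketch of retry with exponential backoff around a component call (a hypothetical helper, not Dapr's or Layotto's actual resiliency API):

package resiliency

import (
	"context"
	"time"
)

// RetryWithBackoff retries op with exponential backoff. This is a simplified
// sketch of the kind of generic resiliency the runtime layer could offer to
// every component, independent of whether the component itself supports it.
func RetryWithBackoff(ctx context.Context, maxRetries int, initial time.Duration, op func(ctx context.Context) error) error {
	delay := initial
	var err error
	for i := 0; i <= maxRetries; i++ {
		if err = op(ctx); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(delay):
			delay *= 2 // exponential backoff before the next attempt
		}
	}
	return err
}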

We can discuss adding "traffic governance for components" in a later iteration; for now, start simple and collect user requirements.

Milestones

Phase 1. Deploy the preview service on two clouds

Phase 2. Katacoda tutorial

  • case 1. Demonstrate Istio traffic governance
  • case 2. Deploy the preview service, MinIO, and a cache inside the lab, and call them through Layotto
  • case 3. Business moves to the cloud: gradually shift traffic to the preview service on AWS
  • case 4. Business runs across clouds: route by rules between the preview service on AWS and the preview service on Alibaba Cloud
  • case 5. Cross-cloud disaster recovery: inject a fault and automatically fail over to the preview service on the other cloud

github-actions bot commented:

This issue has been automatically marked as stale because it has not had recent activity in the last 30 days. It will be closed in the next 7 days unless it is tagged (pinned, good first issue or help wanted) or other activity occurs. Thank you for your contributions.

github-actions bot added the stale label Aug 20, 2022
github-actions bot commented:

This issue has been automatically closed because it has not had activity in the last 37 days. If this issue is still valid, please ping a maintainer and ask them to label it as pinned, good first issue or help wanted. Thank you for your contributions.

seeflood reopened this Sep 27, 2022

github-actions bot commented Oct 6, 2022

This issue has been automatically closed because it has not had activity in the last 37 days. If this issue is still valid, please ping a maintainer and ask them to label it as pinned, good first issue or help wanted. Thank you for your contributions.

github-actions bot closed this as completed Oct 6, 2022