SDN+K8s Routing Optimization Strategy in a 5G Cloud-Edge Collaboration Scenario

The cloud-edge collaboration framework has emerged as 5G networks mature. The horizontal and vertical decomposition of computing power is particularly important, relying on full-lifecycle resource allocation and flexible configuration through SDN, Kubernetes (K8s) and microservices. This paper proposes a new cloud-native architecture for the 5G cloud-edge collaboration scenario. At the same time, it optimizes the SDN routing strategy at multiple levels, and decouples and integrates bounded contexts with the microservice service mesh. For the underlying computing power and network, it performs multi-threshold detection and global route optimization based on K8s + SDN. Experiments demonstrate the stability and reliability of the 5G cloud-edge collaboration architecture and the effectiveness of the route optimization.


I. INTRODUCTION
The edge computing node, located at the edge of the network near the source of objects or data, is the core of the industry's digital intelligent system. It perceives the physical world and, through collaboration between the edge and the central cloud, realizes digital modeling, cognition and decision-making of the physical world; the edge then feeds the decision results back to the physical world through application interaction, closing the loop and enabling continuous iterative evolution of the entire business process.
Compared with the ICT scenario of the central cloud, the edge scenario has many distinct characteristics, which an edge-centered native application architecture (Edge Native for short) must account for:
≫ Decentralized, geographically distributed architecture.
≫ Data is mainly processed at the edge in real time; computing and intelligence are deployed dynamically alongside the data.
The associate editor coordinating the review of this manuscript and approving it for publication was Mauro Fadda.
≫ Mainly event-driven, streaming, reasoning-oriented, asynchronous and real-time data processing.
≫ Edge to edge interaction, edge local closed-loop autonomy.
≫ Diversified and heterogeneous resource configuration: deep integration of computing/network/storage resources customized per scenario, highly discrete devices, and low resource utilization.
≫ Long product life cycles, with multi-vendor and multi-generation technologies coexisting.
VOLUME 11, 2023. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
The OS ecosystem of single computing nodes has historically converged, e.g., Windows, Linux, Android and iOS, referred to as the ''end OS''. The rapid development of DC-centric cloud ecosystems in the past decade has formed the ''cloud OS'', e.g., AWS, Azure, GCP, Alibaba Cloud and Huawei Cloud. The re-emergence of edge computing in the context of industry digital transformation is deeply affecting the architecture of the next generation of IT infrastructure. Simply extending the existing central cloud architecture is far from enough; a new cross-domain distributed edge-cloud collaboration middle layer (also known as the ''edge-cloud collaboration OS'') must be introduced at the architecture level. On the one hand, it is compatible with a wide range of edge hardware; on the other hand, it adapts the services and ecosystem capabilities of the central cloud so that end, edge and cloud can tightly integrate and collaborate, accelerating the construction of edge-native digital transformation solutions and providing effective resource allocation and a better user experience.
As for the capabilities of edge-cloud collaboration, the white paper on edge computing and cloud computing collaboration (2018) summarizes six synergies: resource collaboration, data collaboration, intelligent collaboration, service collaboration, application management collaboration and business management collaboration. To better clarify the hierarchical relationships among the various collaborations and ease the reader's understanding, the six collaborations are merged here into three. Specifically, resource collaboration remains unchanged; the original data collaboration, intelligent collaboration and service collaboration are merged into the new service collaboration; and the original application management collaboration and business management collaboration are merged into the new application collaboration. The key capabilities of the three merged synergies are described as follows:

A. APPLICATION COLLABORATION
Application collaboration enables unified registration and access of edge applications, consistent distributed deployment, and centralized full-lifecycle management. In the implementation practice of edge computing, application collaboration is the core of the whole system, involving cloud, edge, pipe and end [4].

B. SERVICE COLLABORATION
Service collaboration provides the key capability components and a fast, flexible integration mechanism for building edge applications, effectively accelerating their construction. Service collaboration operates at two levels. On one hand, it covers the capabilities provided by cloud services and cloud ecosystem partners from the central cloud, including data, intelligence, and application enablers. On the other hand, through the cloud-native architecture it provides a service access framework based on the Operator pattern, offering a complete process for the distribution, subscription, access, discovery, use, and operation and maintenance of edge services.
In addition, for microservices in the cloud-native architecture, it provides a service discovery and collaboration mechanism spanning edge and cloud, so that location-aware data transmission can be transformed into location-transparent, service-oriented business collaboration.

C. RESOURCE COLLABORATION
From the perspective of a single node, resource collaboration abstracts the underlying hardware and simplifies the development of upper-layer applications. From a global perspective, it also provides global resource scheduling and dynamic acceleration of the global overlay network, enabling efficient use of edge resources and more real-time interaction between edge and edge, and between edge and center.
The essence of cloud computing is network computing: cloud computing connects logical computing resource pools through the horizontal capabilities of the network platform and realizes on-demand resource scheduling and load balancing [1]. As the dual engines of IT, computing and networking have driven the rapid development of today's information and communication technology [2]. Objectively speaking, however, the two engines are unbalanced. Today's on-demand scheduling and deployment of cloud computing resources force the traditional network model to change. To match the automated provisioning and second-level deployment of cloud computing, network engineers began rethinking the evolution of data center cloud networks about a decade ago [3]. From the early Layer 2 technologies of FabricPath/TRILL/OTV to today's VXLAN-based lightweight overlays, the data center cloud network has developed relatively quickly; in particular, with the emergence of SDN, on-demand service capability and automated management inside the cloud center have been basically realized. However, heterogeneous network environments, mixed overlay environments [4], and multi-domain network environments still pose great challenges: the multiplicity of network protocols and the complexity of the technology make automated end-to-end deployment and management difficult. On the one hand [5], the network involves the cooperation of multiple domains: besides the data center network, the backbone network and the edge access network must also be considered, and multi-domain cooperation and network service capability are a focus of SDN today. On the other hand, network-related protocols and technologies are numerous and complex: L2/L3, GRE, VXLAN, EVPN, MPLS, SR, OpenFlow, NETCONF/YANG, Neutron, ODL, ONOS, and more.
In addition, with QoS, encryption, firewalls, NAT, traffic scheduling, TE tunnels and similar technologies, it is not easy to be a good network engineer these days, and it is even harder to design a large-scale SDN network for cloud-network collaboration [6].
The SDN architecture design for cloud-network collaboration needs to cover the intra-cloud network, inter-cloud interconnection and cloud-to-network access, manage complex multi-domain and heterogeneous network resource systems, and realize collaborative cloud-network services with one-stop management. As cloud-network collaboration covers a lot of ground and the needs of different industries differ, large domestic operators, cloud providers and OTT companies are currently relatively advanced in planning and deployment in this field. Taking ''operator services / cloud services / online services'' as examples, the main requirements that cloud-network collaboration places on SDN design are briefly analyzed below [7], [8].

D. UNIFIED MANAGEMENT REQUIREMENTS FOR RESOURCE POOLS IN THE CLOUD
At present, the key to innovation in the 5G+ edge-native collaboration technical framework is to manage multiple virtualized resource pools (OpenStack, VMware, K8s, etc.) and physical resource pools in a unified way. At the same time, a unified way of managing heterogeneous network devices must be chosen under the full-link technical framework. Of course, the choice between a hybrid host overlay and a hardware overlay network becomes a key factor affecting cloud resource management and supervision.
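The unified pool management described above can be sketched in a few lines. The sketch below is purely illustrative: the `UnifiedPoolManager` name, the pool fields and the most-free-capacity placement rule are assumptions for this example, not part of any vendor API.

```python
# Hypothetical sketch: one registry over heterogeneous resource pools
# (OpenStack, VMware, K8s, physical). All names/fields are illustrative.
from dataclasses import dataclass


@dataclass
class ResourcePool:
    name: str
    kind: str            # e.g. "openstack", "vmware", "k8s", "physical"
    capacity_vcpu: int
    used_vcpu: int = 0

    def free_vcpu(self) -> int:
        return self.capacity_vcpu - self.used_vcpu


class UnifiedPoolManager:
    """Single entry point that hides which virtualization stack backs a pool."""

    def __init__(self):
        self.pools: dict[str, ResourcePool] = {}

    def register(self, pool: ResourcePool) -> None:
        self.pools[pool.name] = pool

    def place(self, vcpu_needed: int) -> str:
        # Pick the pool with the most free capacity that can fit the request.
        candidates = [p for p in self.pools.values()
                      if p.free_vcpu() >= vcpu_needed]
        if not candidates:
            raise RuntimeError("no pool can satisfy the request")
        best = max(candidates, key=ResourcePool.free_vcpu)
        best.used_vcpu += vcpu_needed
        return best.name
```

A real implementation would back each pool kind with its own driver (OpenStack, vCenter, kube-apiserver), but the callers would see only this one interface.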

E. COOPERATIVE SCHEDULING OF MULTI DOMAIN NETWORK SYSTEMS
This is a problem encountered by large service providers. The cloud network involves multiple network domains: the backbone network (including cloud networking), the SD-WAN edge access network, the network inside the cloud center, and the multi-cloud interconnection system (the internal cloud resource pool network and partner cloud systems). Different network technologies are used in these domains. The question is how, through SDN cloud-network collaboration technology, to achieve a cloud resource layout suited to cross-domain deployment, so that users can access at one point and deploy and be served across the whole network [10].

F. NETWORK COLLABORATIVE NETWORK SERVICE CAPABILITY AND STANDARD SPECIFICATION
Large service providers hope to develop API specifications and YANG models to reconstruct the cloud-network collaboration architecture and to standardize the management of multi-domain network resources as well as the interface specifications for cloud-pool integration [11]. The idea is sound, but it places higher demands on network vendors: in reality, the functions of each network domain differ, and so do the interface specifications of each vendor. How to create a unified cloud-network collaborative service capability that the business platform can call uniformly remains an interesting topic.
The network construction of the financial industry is inseparable from its informatization over the past three decades, and it has gone through multiple stages of development. At this stage, the financial industry network is dominated by the ''two sites, three centers'' structure. DWDM and high-bandwidth dedicated lines build a high-speed forwarding ''core backbone network'' for interconnection between data centers, with the data centers serving as the access nodes of the backbone. On top of the core backbone framework, a ''three-level network'' architecture is extended for the convergence of branches at all levels [12]: the first-level network connects the data center with first-level branches, the second-level network connects first-level and second-level branches, and the third-level network connects second-level and third-level branches.
The data center adopts a ''bus-type, modular'' architecture, follows the principle of ''vertical layering and horizontal zoning'', divides the network into multiple areas according to application system, importance and security protection requirements, and constructs a switching bus with high-performance switches [13]. The network partitions are interconnected through the bus (see Figure 2).
Network partitions fall into three categories: business areas, isolation areas and specific functional areas.
(1) Business area: carries the application servers and database servers of various systems. Application systems are divided into different business areas according to specific principles.
(2) Isolation area: also known as the DMZ, it carries various front-end machines and provides services to the Internet or third-party organizations [14].
(3) Specific functional areas: for example, the management area carries the monitoring system, process system and operation terminals used for data center maintenance, while the WAN area connects the user data center with the backbone network.
Advantages of the traditional architecture: security, stability, reliability and scalability.
Disadvantages: insufficient flexibility, silo barriers, low automation and high construction cost.

II. DETAILED DESCRIPTION OF THE PROPOSED TECHNICAL SCHEME
A. CONTENTS OF THE INVENTION
In Figure 1, the design of SDN under the cloud-network collaborative architecture involves many aspects. It must consider the automatic connection of end-to-end tenant VPNs across the multiple domains connected to the cloud, the management and collaborative deployment of network resources in different domains, end-to-end line SLA detection and traffic scheduling, and decentralized, per-domain management capabilities. First come the network technology attributes: in traditional networks, vendors such as Cisco have done very well on basic network technologies such as routing policy, quality of service and security. However, as an innovation in both technology and management mode, SDN/SD-WAN design must be built on robust basic network capabilities while adding more SDN attributes and abstract service functions. The author believes that a good SDN design under the cloud-network collaborative architecture must get three aspects right, which we focus on below:

B. DESIGN OF THE SDN SOUTHBOUND COORDINATOR ADAPTATION LAYER
The SDN coordinator module is responsible for the unified collection, abstraction and construction of network resources throughout the process, realizing the modeling of network resources and capabilities; it is the basic component underlying the SDN network orchestration platform's capabilities and service support. Southbound, the SDN coordinator docks with the OpenStack platform, the backbone network controller, the SD-WAN controller, data center network equipment controllers, third-party cloud platforms, and so on.
For the SDN coordinator adaptation layer, our suggestions are as follows. The SDN coordinator realizes the transformation from physical network to logical network, provides a unified abstraction of network service capability over physical network resources, and integrates and decouples heterogeneous vendors' equipment; this is the foundation of SDN network design. Multi-domain scenarios involve a variety of network devices and technologies, controllers from multiple vendors, and vendor-specific YANG models, command sets and configuration methods. SDN designers must master system integration of the latest technologies across multiple fields, and have the ability and experience to innovate in practice.
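The decoupling idea above is essentially the adapter pattern: every vendor or domain controller is wrapped by a driver that exposes one normalized interface. The following minimal sketch assumes hypothetical driver names and a made-up return format; it is not any real controller's API.

```python
# Illustrative sketch of a southbound adaptation layer: each vendor/domain
# controller is wrapped by a driver exposing one normalized interface.
from abc import ABC, abstractmethod


class SouthboundDriver(ABC):
    @abstractmethod
    def create_l3vpn(self, tenant: str, vni: int) -> dict: ...


class VendorAEvpnDriver(SouthboundDriver):
    """Would translate the logical request into vendor-A NETCONF/YANG calls."""

    def create_l3vpn(self, tenant, vni):
        return {"driver": "vendor-a", "tenant": tenant, "vni": vni,
                "proto": "netconf"}


class VendorBRestDriver(SouthboundDriver):
    """Would translate the same logical request into vendor-B REST calls."""

    def create_l3vpn(self, tenant, vni):
        return {"driver": "vendor-b", "tenant": tenant, "vni": vni,
                "proto": "rest"}


class SdnCoordinator:
    """Maps each network domain to its driver; callers never see vendor details."""

    def __init__(self, drivers: dict):
        self.drivers = drivers   # domain name -> SouthboundDriver

    def create_l3vpn(self, domain: str, tenant: str, vni: int) -> dict:
        return self.drivers[domain].create_l3vpn(tenant, vni)
```

Adding a new vendor then means adding one driver class; the orchestration layers above the coordinator are untouched, which is exactly the decoupling the text calls for.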

C. SDN NETWORK SERVICE ORCHESTRATOR MODULE
The SDN network service orchestrator module relies on the unified network abstraction, logical network and network resource model provided by the SDN coordinator to provide the basic mechanisms and management for all network services required by the business platform. These basic mechanisms include tenant-based VPC network routing, tenant-based VRF virtual networks and routing, Layer 2 and Layer 3 VPNs in the cloud data center, BGP/IGP routing, SD-WAN access integrated with the SD-Core backbone, traffic engineering, SLA service policies and security, and more. The SDN service orchestrator abstracts network resources and capabilities into a unified business model and offers it to the cloud service platform; the business layer does not need to perceive any details of the underlying vendors' equipment and is completely decoupled. For the unified business orchestration module, we offer two suggestions. First, the orchestrator's overall architecture should be split by business and function: it is recommended to implement service modules at the minimum granularity on an existing mature microservice architecture, so that microservices can be composed as required to provision various business scenarios and achieve a fully decoupled, modular system design.
Second, the design of the unified service orchestrator should be carried out by professional architects who are familiar with the underlying network architecture, technologies and protocols, master the full stack of new cloud-network technology and software architecture, and deeply understand the relationship between the customer's business and the network.
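The "minimum granularity" recommendation can be illustrated as follows: each network operation is a tiny registered module, and a business scenario is just an ordered composition of modules. The module names, scenario and context fields below are invented for illustration only.

```python
# Sketch of minimum-granularity service modules: a business scenario is an
# ordered composition of small registered steps. All names are illustrative.
MODULES = {}


def module(name):
    """Decorator that registers a function as a named service module."""
    def register(fn):
        MODULES[name] = fn
        return fn
    return register


@module("create_vpc")
def create_vpc(ctx):
    ctx["vpc"] = f"vpc-{ctx['tenant']}"


@module("bind_vrf")
def bind_vrf(ctx):
    ctx["vrf"] = f"vrf-{ctx['tenant']}"


@module("attach_sdwan")
def attach_sdwan(ctx):
    ctx["sdwan_site"] = f"site-{ctx['tenant']}"


def run_scenario(steps, tenant):
    """Execute a scenario by running the registered modules in order."""
    ctx = {"tenant": tenant}
    for step in steps:
        MODULES[step](ctx)
    return ctx
```

A new business scenario (say, enterprise cloud broadband) is then just a new list of step names, with no change to the modules themselves; this is the fully decoupled composition the text recommends.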

D. SDN NETWORK SERVICE NORTHBOUND API SPECIFICATION
The cloud-network collaborative SDN architecture provides end-to-end network access services for data centers, private clouds, public clouds, and branches. The API specification and YANG model must not only support the management and integration of network resources, but also connect with multiple cloud platforms and adapt to multi-domain networks, multi-cloud platforms and other programmable platforms, so that the northbound business system can call them uniformly and users get one-stop access to the cloud network. Because the API business processes and call flows of cloud providers and operators differ greatly, and the network architectures and multi-cloud resource layouts of cross-domain deployments vary widely, getting the API specification of SDN network orchestration right is technically complex. For API specifications, our suggestions are as follows. Cloud provider integration must start as early as possible: most cloud providers have their own non-standard API specifications, and each differs greatly in access methods, routing designs and APIs, so experienced developers must join the discussion and design as early as possible.
The API call flow must be well coordinated. For example, for resource applications it must be settled whether cloud providers and service providers log on at one point or at multiple points in a multi-domain environment. A typical customer need is: access at one point, deploy at multiple points, serve the whole network. This involves multi-service capability for cross-domain deployment as well as multi-site resource layout and unified management.
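To make the "one point access, multi-point deployment" requirement concrete, a northbound service order might look like the sketch below. Every field name, value and the validation rule are assumptions made up for this illustration; they do not reflect any real operator's API specification.

```python
# Hedged sketch of a northbound service order for "access at one point,
# deploy at multiple points, serve the whole network". Fields are invented.
import json

order = {
    "tenant": "enterprise-42",
    "access_point": {"domain": "sdwan-east", "site": "branch-01"},
    "deployments": [
        {"domain": "dc-north", "service": "l3vpn", "bandwidth_mbps": 200},
        {"domain": "public-cloud-a", "service": "cloud-connect",
         "bandwidth_mbps": 100},
    ],
    "sla": {"tier": "gold", "max_latency_ms": 20},
}


def validate_order(o: dict) -> bool:
    """Minimal structural check a northbound gateway might run first."""
    required = {"tenant", "access_point", "deployments", "sla"}
    return required <= o.keys() and len(o["deployments"]) >= 1


body = json.dumps(order)   # what would be POSTed to the orchestrator
```

The point of the shape is that the single `access_point` and the list of `deployments` are separated, so the orchestrator, not the user, fans the request out across domains.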

E. SDN PRACTICE SHARING UNDER CLOUD NETWORK COLLABORATION ARCHITECTURE
The cloud-network collaboration architecture of the large service providers analyzed above covers at least access, backbone and cloud center, and involves both unified planning of the overall architecture and multi-domain technology integration. The following shares one customer's SDN deployment practice. The customer is a large service provider seeking unified management and collaborative operation of the cloud network. Based on SDN technology, the network infrastructure was reconstructed to connect public cloud and private cloud resources, provide automatic end-to-end deployment and scheduling of network services (SDN business orchestration), and support multiple scenarios such as cloud services, cross-cloud connectivity (data center, public cloud, private cloud) and branch networking, including solving the ''last mile'' problem of cloud computing. This customer, a large operator, paid special attention to the following aspects of the architecture design:
(1) Unified business orchestration from access through backbone to cloud, as a one-stop service.
(2) Seamless connection between SD-WAN POPs and MPLS PE nodes.
(3) Automatic provisioning of DCI L2/L3 VPN services, with automatic connection from the internal VR of the cloud DC to the GW.
(4) Networking scenarios: enterprise one-line cloud access, enterprise cloud networking, enterprise cloud broadband, Internet + private line hybrid networking, etc.

F. SDN ORCHESTRATION AND SERVICE PLATFORM
In the project, through our orchestration and service platform we provided the customer with multi-domain service orchestration, business orchestration for public cloud connection and hybrid cloud interconnection, L2/L3 VPN network service orchestration, orchestration integrating SD-WAN cloud access with the backbone network, end-to-end unified resource scheduling, SLA quality assurance, path planning and VPN tenant security management. At the same time, the platform provides a unified northbound API specification and YANG model for multi-domain services from the edge to the backbone, as well as integration with various cloud providers, realizing the docking and decoupling of controllers from different vendors. This greatly simplifies the traditional burden on customer business systems of calling multiple network domains (shielding the diversity of southbound network resources and interface specifications) and lays a foundation for the customer's business middle platform to provide cloud-network collaboration capabilities. Due to the complexity of the customer's business requirements, environments and processes, we did a great deal of customized development based on the Terra business orchestrator, and managed the SDN data center controller, SD-Core backbone controller and SD-WAN edge access controller in the Terra service orchestration collaboration layer. In Figure 2, we focus on the following points. In the cloud center, the VPC network of the cloud resource pool and the BM network of the physical resource pool deploy a hybrid overlay network, including OpenStack Neutron linkage.
The second point is how, when the customer purchases network equipment from multiple vendors in different POD areas due to bidding requirements, to manage the VXLAN/EVPN networks of those vendors and provide a unified logical network service capability to the upper layer. The third key design point is the collaborative management of the VPC and external networks, especially the backbone, to achieve interoperability and unified management between the VPC and border routers (GW/VBR). These are problems in heterogeneous environments that traditional network vendors find hard, or are unwilling, to solve, and they are all labor-intensive technologies. We implement the above functions based on the TerraDC controller of Dadi Cloud Network, adding NFV service management for L4-L7.

G. BACKBONE NETWORK
We focus on automatic provisioning of L2VPN and L3VPN services and on SLA and traffic engineering design for important services. The service provider's backbone adopts Multi-Protocol Label Switching (MPLS) and Segment Routing Traffic Engineering (SR-TE) tunnels on the main backbone routers for networking, and connects with the cloud center and tenant branches through virtual private networks (L2VPN/L3VPN). The customer uses SR-TE to provide SLA quality assurance for the gold, silver and bronze service tiers, and uses SR+SDN to reconstruct a new generation of backbone for traffic scheduling and management. Since the PE nodes extend to the data center and the public cloud, the interconnection between the PEs and the border routers (GW/VBR) and the automatic connection with SD-WAN access are also focuses of this phase. We implement the above functions based on the TerraCore controller of the Dadi Cloud network.
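The tiered-SLA idea can be illustrated with a toy path computation: gold traffic is routed on the lowest-latency path, while lower tiers optimize cost. The topology, tier-to-metric mapping and edge weights below are invented for illustration; real SR-TE policies carry far richer constraints.

```python
# Sketch of SLA-tiered path selection on a small backbone graph.
# Graph format: {node: [(neighbor, {"latency": ms, "cost": units}), ...]}.
import heapq


def shortest_path(graph, src, dst, weight):
    """Plain Dijkstra over the chosen edge metric; returns (path, total)."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, metrics in graph[u]:
            nd = d + metrics[weight]
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]


def pick_path(graph, src, dst, tier):
    # Gold optimizes latency; silver/bronze optimize monetary cost.
    metric = "latency" if tier == "gold" else "cost"
    return shortest_path(graph, src, dst, metric)
```

With one fast-but-expensive route and one cheap-but-slow route between two PEs, the same request is steered differently depending only on its tier, which is the essence of the gold/silver/bronze policy split.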

H. EDGE ACCESS NETWORK
In the design, we focus on the design and probing of the SD-WAN POP networking, the design and networking of multi-line POP nodes, and how the POPs integrate with the MPLS backbone. The CPE equipment on the access side uses automatic probing to select the best POP node. In the integration with the MPLS network, we must also consider how to prevent routing loops and traffic backflow, and how to unify SD-WAN tenants with MPLS tenants to solve the last-mile problem of MPLS. With the pandemic, customers have also been integrating mobile office into the SD-WAN system. We implement this part of the functionality based on TerraEdge.
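The "best POP" selection by the CPE can be sketched as a simple scoring of probe results. The weighting (loss penalized much more heavily than latency) and the probe values are assumptions for illustration, not the vendor's actual algorithm.

```python
# Illustrative sketch of CPE best-POP selection from probe measurements.
def pop_score(latency_ms: float, loss_pct: float,
              w_latency: float = 1.0, w_loss: float = 50.0) -> float:
    # Loss is weighted heavily: a lossy path hurts more than a slow one.
    return w_latency * latency_ms + w_loss * loss_pct


def select_pop(probes: dict) -> str:
    """probes maps POP name -> (avg latency in ms, loss in percent)."""
    return min(probes, key=lambda p: pop_score(*probes[p]))
```

In practice the CPE would keep probing periodically and re-run the selection, switching POPs only when the score gap exceeds a hysteresis threshold to avoid flapping.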

1) NETWORK DESIGN PRINCIPLES
The application of SDN technology is a revolutionary change to the architecture of the financial data center, so the next-generation financial data center network needs a targeted design. We summarize four design principles for the future financial data center network:
(1) Service orientation: network functions are provided externally as services with standard API interfaces, and the network system is organized internally as services, improving external service capability and reducing the complexity of calling network capabilities.
(2) Unified management and orchestration: the management interfaces for the data center's Layer 2/3 connectivity and Layer 4/7 functions are unified. Different network resource pools are managed in a two-level orchestration mode: the lower level adapts to the management operations of each resource pool, while the upper level coordinates and orchestrates the heterogeneous resources.
(3) Standardized resource pools: the traditional network silo architecture is broken, and a new generation of Layer 2 technology is used to build resource pools, improving the flexibility of computing and storage resource mobilization without expanding the broadcast domain or increasing the risk of Layer 2 loops.
(4) Integration and innovation on mature technologies: based on the requirement of smooth, compatible evolution, basic scalable network technologies and protocols are inherited and existing technologies are combined for integrated innovation. Innovating without losing stability ensures the overall stability of the network architecture.

2) DESIGN AND CONCEPTION OF THE FINANCIAL CLOUD NETWORK ARCHITECTURE MODEL
a: MANAGEMENT CONTROL PLANE MODEL
Various functional components in the traditional network, such as switches, firewalls and load balancers, usually come from different device vendors. The management interfaces of the products differ (CLIs and UIs are different) and generally do not support API interfaces. Setting device parameters and adjusting configurations rely heavily on manual work and demand high professional skill from maintenance personnel. In this context, it is difficult to deliver the network as a service or to automate it, and a single tool or platform for unified management and service release across all network equipment is hard to achieve.
With the development of industrial technology, network product vendors have gradually shifted from closed equipment to open interfaces. More and more products now support RESTful APIs, and the management mode is no longer limited to traditional CLIs and UIs.
In the new data center architecture, to achieve network servitization, automation and unified scheduling, it is not enough for network components to support APIs and programmability. We believe the management control plane must further abstract the APIs of each brand of network equipment, establish a standard network service model through a cloud network control platform, and connect to each network component. On the one hand, this standardizes the network service interface, unifies the network's external interface, shields the differences among brands' interfaces, and simplifies development for the upper-layer cloud management platform. On the other hand, complex network parameters are hidden: the technical implementation and parameter settings of the network components are still completed by professional network engineers, while the upper layer of the cloud management platform focuses on service processes, business orchestration, workflow scheduling and invocation of standard network services (see Figure 3). The cloud network control platform can be divided into two layers: the service abstraction layer and the driver control layer.
The service abstraction layer is responsible for abstracting network resources into standard network services and models, such as network creation and router services, and for providing standard network service APIs to the upper platform.
The driver control layer is responsible for implementing the standard network services on specific products, translating the upper-layer API calls into interfaces the network components recognize, and adjusting the components' parameters.
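The two-layer split can be sketched in a few lines: the service abstraction layer exposes one standard "create router" call, and the driver control layer renders it into per-brand configuration. The brand names and command templates below are hypothetical placeholders, not real device syntax.

```python
# Sketch of the two-layer cloud network control platform split.
class DriverControlLayer:
    """Translates standard service calls into brand-specific configuration."""

    TEMPLATES = {
        "brand-x": "set vrouter {name} tenant {tenant}",
        "brand-y": "router add --name {name} --vrf {tenant}",
    }

    def render(self, brand: str, name: str, tenant: str) -> str:
        return self.TEMPLATES[brand].format(name=name, tenant=tenant)


class ServiceAbstractionLayer:
    """Offers one standard API regardless of which brand backs a partition."""

    def __init__(self, driver: DriverControlLayer, partition_brands: dict):
        self.driver = driver
        self.partition_brands = partition_brands  # partition -> brand

    def create_router(self, partition: str, name: str, tenant: str) -> str:
        brand = self.partition_brands[partition]
        return self.driver.render(brand, name, tenant)
```

The upper cloud management platform calls only `create_router`; which brand's syntax is emitted depends on the partition, exactly the shielding of brand differences described above.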

b: SWITCHING NETWORK MODEL
As Figure 4 shows, the traditional switching network is stable but lacks flexibility and efficiency. The computing, storage, network and machine room resources between network partitions are all in exclusive mode: computing hosts in different partitions cannot share resources, virtual machines are not allowed to migrate between hosts in different partitions, and the utilization of computing resources drops. The switching networks and functional components of small and medium-sized financial institutions generally use 10-gigabit equipment with strong performance; however, for security, reliability and compliance reasons, the number of network partitions in a financial data center cannot be reduced, so network resource utilization is generally low where customer bases and transaction volumes are small. In terms of machine room space utilization, equipment placement must take cabling, TOR switch deployment, power, cooling and other factors into account, making flexible deployment difficult. If traditional technologies such as stretched Layer 2 are used to solve these problems, the broadcast domain grows and the risk of Layer 2 loops rises, putting huge pressure on subsequent operation and maintenance.
Therefore, we believe that the new switching network architecture needs to improve the utilization of computing, storage, and computer room space resources without increasing the operation and maintenance risk. At the same time, the concept of tenants is introduced to isolate and divide a switching network. For a single financial institution, it can be used to implement the consolidated deployment of some traditional network partitions, or it can be used for the consolidated deployment of multiple financial tenants.
The new switching network architecture of the financial data center still adopts the bus architecture, retains the traditional partitions and traditional application access, and adds multiple cloud network partitions for new application launch or stock application migration. On the premise that the risk is controllable, some network areas with the same functions are consolidated and deployed. For example, multiple business areas are merged into one cloud network partition, and multiple isolation areas are merged into another cloud network partition (see Figure 4).
A cloud network partition is composed of SDN devices of the same brand. Vxlan is used internally to separate the underlay and overlay networks, so as to realize the decoupling of the network physical architecture and logical architecture. The physical structure of Spine+ Leaf is adopted, in which the computing Leaf accesses the computing server resources of virtual machines, the network function Leaf accesses the network element service equipment resources such as load balancing and firewall, the Border Leaf is responsible for the interconnection with the core switching equipment of the data center, the Spine device is responsible for the traffic interworking between the Leafs in the cloud network partition, and each cloud network partition is managed and controlled by its own controller.

c: INTERCONNECTION MODEL BETWEEN PARTITIONS
In figure 5, the core switching network of the data center is networked by independent switching equipment. Different cloud network partitions may use different SDN solutions, VOLUME 11, 2023 protocols and technologies. The Vxlan label inside the cloud network partition is stripped after the data packet leaves the region. Therefore, interoperability between cloud network partitions cannot be achieved at the Overlay level.
The challenge of the networking model and functional design in the data center above lies in how to identify the tenant traffic that exists in different cloud network partitions, so as to ensure that the cloud network partition can correctly forward the IP address reused multi tenant traffic to the correct tenant resources after passing the core switching network. After evaluation, we believe that the VRF (virtual route forwarding table) or MPLS-VPN (virtual network based on multi protocol label switching) technologies in traditional technologies can well solve the problem of tenant information transmission between multiple cloud network partitions. A VRF or VPN network can be abstracted into a regional interconnection router to connect different logical regions of the same tenant.
At the same time, we propose a regional interconnection SDN control technology to achieve the coordinated control of the core switching network and the SDN controllers of each cloud network partition, and achieve the tunneling capability of multi tenant information identification transmission.
In the corresponding control plane, we designed an RI controller to realize the automatic configuration of the core switching network, which includes two functions: one is to dynamically create the configuration VRF and its related configurations according to the tenant changes; the other is to dynamically publish the routing address in the case of dynamic changes in the resources of different SDN cloud partitions, so as to maintain the connectivity of dynamically transformed network resources.
The equipment of the core switching network is managed through the netconf protocol. At the same time, the dynamic resource information of the corresponding cloud network partition is read regularly or triggered by the API interfaces of different SDN controllers. In the process of creating new tenants across heterogeneous SDNs, the corresponding VRF channels will be automatically created according to the design of the data forwarding plane mentioned above. When new network segment resources are created, the RI controller will be triggered to query the new network segment information through the SDN control API, and update the VRF routing table information in the core exchange network through OSPF dynamic route injection. At the same time, the RI controller designed by us will regularly query the SDN network resource information, Compare the deleted network resource information and clean up the routing table.
In the specific R&D process, we took into account the different operation management protocols supported by the core network equipment and the new tunnel protocol application methods proposed by different partners in the future. Therefore, in terms of code architecture, we supported different equipment control methods and the expansion capabilities of tunnel protocols, so as to improve and optimize them in the future.

d: FIREWALL AND LOAD BALANCING MODEL
Firewall and load balancing are used to provide Layer 4 to Layer 7 network services, achieve security isolation between logical regions, and share server traffic. In Figure6, in the financial cloud network architecture model, hardware resources such as firewalls and load balancers can be pooled and deployed and scheduled as needed. The unified management of firewall and load balancing resource pool is realized through the cloud control platform. With the maturity of VNF technology, in the future, load balancing and firewalls will be connected to different business logic areas as VNFs to achieve reasonable scheduling and scheduling of traffic.

3) CONCEPTION OF THE MODEL OF ''TWO PLACES AND THREE CENTERS''
Under the highly available scenario of ''two places and three centers'', the network services of the financial industry cloud platform must also support the network multi tenant capability across data centers. In the design, we can follow the design idea of the existing backbone network, and introduce MPLS VPN technology to introduce the traffic of the same tenant into a VPN network, so that the resources that can be called by a single tenant can cover all data centers, and support the access of branches. At the same time, MP-BGP's rich routing capability and QoS technology are used to realize the transfer of tenant information across the data center, and the mutual isolation between tenants and application traffic.

III. CONCLUSION
The cloud edge collaboration framework in 5G scenarios came into being. The horizontal and vertical decomposition VOLUME 11, 2023 of computing power is particularly important, which depends on the life-cycle resource allocation and flexible configuration of SDN+K8s+ microservices. In this paper, we propose a new form of cloud native architecture for 5G cloud edge collaboration scenarios. At the same time, it optimizes the SDN routing strategy at multiple levels, and decouples and integrates the context boundary with the micro-service service grid at multiple levels. For the underlying computing power and network form, it performs multi-threshold detection and global routing optimization based on K8S +Sdn. The experiments prove the stability and reliability of 5G + cloud edge collaboration architecture and the advanced nature of routing optimization.