
Embracing Diversity in Gaming: A Multifaceted Development Approach

In an era of diverse devices, attracting players goes beyond a single platform. Recognizing this, we specialize in Android and iOS development, ensuring your game shines on various devices. Moreover, we offer the flexibility to export games to the Web platform, allowing players to effortlessly experience the joy of gaming directly in their browsers.

Game outsourcing is fraught with risks, with the market saturated with low-cost providers that often fail to deliver promised results. To avoid misunderstandings and ensure optimal outcomes for both parties, we prioritize thorough communication of requirements before providing a quote. Additionally, we assist you in understanding the underlying needs, fostering a shared understanding of the project from the outset.

Key Considerations for Cross-Server Operations: Tackling the Challenges of Multi-Server Collaboration

As user numbers and interaction frequency grow rapidly, the processing capacity of a single server will eventually hit its limit. At that point, adopting a multi-server architecture becomes essential to support business scalability. However, the complexity of multi-server collaboration far surpasses single-server multi-threaded development. Developers must address challenges such as network latency, data consistency, and operational timing.

Traditional vs. Modern Solutions

Traditional Approach: Callback Function Handling

Early cross-server collaboration often relied on callback functions to manage operations:

– **Data Packet Transmission**: Operations were encapsulated into data packets and sent to the target server for processing.

– **Callback Mechanism**: Callback functions were registered to trigger subsequent operations once the processing results came back.

Challenges:

– **Heavy Development Burden**: Developers had to manually save context data, handle multi-layered logic, and maintain data consistency.

– **Scattered Code**: Development logic was fragmented across multiple sections, increasing maintenance difficulty.
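
To make the traditional pattern concrete, here is a minimal, simplified sketch in Python; `send_packet` and the item-handling helpers are hypothetical stand-ins rather than any specific engine's API. It shows how context must be saved by hand and how one operation's logic ends up split across two functions:

```python
# Minimal sketch of callback-style cross-server handling (hypothetical API).
# The follow-up logic runs in a different function, so every piece of context
# it needs has to be stashed by hand.

pending_requests = {}  # request_id -> manually saved context

def grant_item_locally(player_id, item_id):
    print(f"grant {item_id} to {player_id}")

def rollback_item(player_id, item_id):
    print(f"roll back {item_id} for {player_id}")

def transfer_item(player_id, item_id, target_server, send_packet):
    request_id = f"{player_id}:{item_id}"
    pending_requests[request_id] = {"player": player_id, "item": item_id}
    packet = {"op": "transfer_item", "request_id": request_id,
              "player": player_id, "item": item_id}
    send_packet(target_server, packet)  # fire-and-forget; the reply arrives later

def on_transfer_result(result_packet):
    # Called from the network layer when the target server answers; the logic
    # for one operation is now split across two places in the codebase.
    ctx = pending_requests.pop(result_packet["request_id"], None)
    if ctx is None:
        return  # stale or duplicated reply
    if result_packet["ok"]:
        grant_item_locally(ctx["player"], ctx["item"])
    else:
        rollback_item(ctx["player"], ctx["item"])

# Simulated round trip: "send" the packet, then hand a success reply back in.
transfer_item("player-1", "sword-9", "server-B",
              send_packet=lambda server, pkt: print(f"-> {server}: {pkt}"))
on_transfer_result({"request_id": "player-1:sword-9", "ok": True})
```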

Modern Approach: Coroutines and Lambda Wrappers

With technological advancements, coroutines and Lambda functions have provided more efficient solutions for cross-server operations:

– Coroutines retain context data, reducing the need for manual preservation.

– Lambda syntax consolidates operational logic into a single block, improving code readability.
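
For contrast, a rough coroutine version of the same operation might look like the following (Python asyncio is used purely for illustration, and the RPC stub is a placeholder). Local variables survive across the await, so nothing needs to be saved manually and the whole flow reads top to bottom:

```python
import asyncio

# Minimal coroutine sketch of a cross-server operation (assumed, simplified API).

async def rpc(target_server, packet):
    # Placeholder for a real cross-server RPC; here we just simulate latency.
    await asyncio.sleep(0.01)
    return {"ok": True}

async def transfer_item(player_id, item_id, target_server):
    packet = {"op": "transfer_item", "player": player_id, "item": item_id}
    result = await rpc(target_server, packet)  # suspend until the reply arrives
    if result["ok"]:
        print(f"grant item {item_id} to {player_id}")
    else:
        print(f"roll back item {item_id} for {player_id}")

asyncio.run(transfer_item("player-1", "sword-9", "server-B"))
```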

Limitations:

While coroutines simplify logic management, maintaining data consistency in multi-threaded environments still falls to developers. Additionally, complex cross-server flows can end up deeply nested, which itself makes the code harder to maintain.

Core Challenges of Cross-Server Operations

1. Server Failures

When an operation is forwarded to another server, the original server may not immediately detect the target server’s status. If the target server crashes, the operation fails, potentially disrupting the overall process.

– Solution: Implement timeout mechanisms and establish transaction systems to ensure data consistency and recoverability.
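
As a rough illustration of the timeout idea (a Python asyncio sketch; the forwarded operation and the timeout value are placeholders), the caller can bound how long it waits for the target server and fall back to its rollback path when no answer arrives:

```python
import asyncio

# Sketch of a timeout guard around a forwarded operation.
# If the target server never answers, the caller fails fast instead of hanging
# and can trigger its transaction rollback or retry path.

async def forward_op(packet):
    await asyncio.sleep(5)  # simulate a crashed or unresponsive target server
    return {"ok": True}

async def forward_with_timeout(packet, timeout_s=2.0):
    try:
        return await asyncio.wait_for(forward_op(packet), timeout=timeout_s)
    except asyncio.TimeoutError:
        # Mark the operation as failed so the transaction can be rolled back.
        return {"ok": False, "reason": "timeout"}

print(asyncio.run(forward_with_timeout({"op": "transfer_item"})))
```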

2. Data Conflicts

Operations across servers may experience data contention due to network latency, especially in high-concurrency scenarios.

– Solution: Employ optimistic or pessimistic locking strategies based on business needs.
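
A minimal sketch of the optimistic variant, assuming a version counter on the record. The dict below stands in for a database row; a real store would perform the version check and the write in one atomic step (for example, a conditional update) rather than in application code:

```python
# Sketch of optimistic locking with a version counter (in-memory stand-in).

record = {"gold": 100, "version": 0}

def optimistic_update(delta, max_retries=3):
    for _ in range(max_retries):
        snapshot = dict(record)                      # read the current state
        new_gold = snapshot["gold"] + delta          # compute off the snapshot
        if record["version"] == snapshot["version"]: # nobody else committed first
            record["gold"] = new_gold
            record["version"] += 1
            return True
        # Conflict detected: another writer bumped the version, so retry.
    return False

print(optimistic_update(-10), record)
```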

3. Operational Order Disruptions

Differences in server load can cause the execution order of cross-server operations to change, leading to unintended results.

– Solution: Introduce order management mechanisms to ensure operations execute in the intended sequence.
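
One simple form of such an order-management mechanism is to tag each operation with a sequence number and buffer anything that arrives early. The sketch below is illustrative only; operations may arrive out of order from different servers but are applied strictly in sequence:

```python
# Sketch of sequence-number ordering for cross-server operations.

class OrderedApplier:
    def __init__(self):
        self.next_seq = 1
        self.pending = {}  # seq -> op, buffered until its turn comes up

    def receive(self, seq, op):
        self.pending[seq] = op
        # Apply every operation whose turn has arrived.
        while self.next_seq in self.pending:
            current = self.pending.pop(self.next_seq)
            print(f"applying #{self.next_seq}: {current}")
            self.next_seq += 1

applier = OrderedApplier()
for seq, op in [(2, "damage 50"), (1, "spawn"), (3, "despawn")]:  # arrives out of order
    applier.receive(seq, op)
```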

Xenon’s Innovative Solutions

Xenon is purpose-built to address the core challenges of multi-server collaboration, offering the following key technologies:

1. Virtual Enclosed Environments (AccessSpace)

Each operation is encapsulated within an independent virtual environment, ensuring seamless collaboration in operation timing and data security.

2. Built-In High-Efficiency Transaction System

Even under high-frequency data collisions, transactions remain stable, eliminating risks of data inconsistencies.

3. FunctionCall Model

Xenon transforms cross-server operations into a function call model resembling single-server operations:

– Eliminates reliance on traditional callback mechanisms, reducing code fragmentation.

– Avoids nested coroutine structures, lowering development and maintenance costs.

4. Simulated Single-Server Development Experience

Developers can focus on high-level business logic without delving into server synchronization details, enabling rapid development of large-scale distributed applications.

Why Choose Xenon

Xenon’s innovative technology has been successfully applied in multiple large-scale distributed projects, demonstrating its ability to address the following challenges:

– Rapid recovery from server failures, maintaining system stability.

– Efficient transaction mechanisms to resolve data conflicts and increase operation success rates.

– Accurate operation sequencing in multi-server collaborations, preventing business logic disruptions.

Whether for MMORPGs or high-concurrency applications, Xenon helps you build stable, scalable system architectures quickly.

Want to learn more or discuss how we can solve your technical challenges? Feel free to contact our technical team. We look forward to collaborating with you!

Client-Server Synchronization in Online Games (Part 2): Challenges and Solutions for Open-World Games

Unlike MOBA or party games, which are typically confined to limited spaces and short gameplay sessions, MMO (Massively Multiplayer Online) and SLG (Simulation Strategy) games operate in expansive, persistent worlds. These fundamental differences demand a completely different approach to networking technology.

In limited-space games, frame synchronization can be used, where the server only synchronizes commands, and clients rely on identical random seeds and logic to ensure consistency. However, open-world games require a state synchronization approach. In this architecture, the server must persistently store the entire game world’s data and synchronize it with the clients.

In this setup, every player action is processed and validated by the server, while the client primarily serves as a renderer for the results. However, this synchronization mechanism faces significant challenges when connecting players from different regions. Once client-server latency exceeds 150ms, players will experience noticeable lag, severely impacting gameplay fluidity.

Eventual Consistency: Minimizing Latency, Maximizing Fluidity
To address these latency issues, the industry employs a technique known as eventual consistency. Let’s use character movement as an example:

Client-Side Prediction and Command Synchronization
When a player initiates a movement command on the client side, the client uses the same movement logic as the server to predict the character’s path. Simultaneously, every movement change (direction or behavior) is sent to the server, including movement commands and starting coordinates.

Server-Side Validation and Path Correction
Upon receiving these commands, the server calculates the character’s position based on the provided instructions and coordinates. It then moves the server-side character to follow the client-side movement nodes.

Final Position Adjustment
When the player stops moving, the client sends the final position to the server. The server then moves the character to this final position, ensuring complete synchronization between the two.

Smoothing Movement and Collision Detection
To ensure natural and seamless visual feedback, the server does not rigidly follow the client’s exact movement path. Instead, it periodically adjusts direction to follow the shortest route to the predicted position.

For collision detection, the server uses the client’s movement path coordinates rather than the server-side path-following coordinates. This approach prevents desynchronization issues where a character might get incorrectly blocked, disrupting gameplay.

Additionally, when the discrepancy between the server and client positions grows (while still within acceptable tolerance), the server can slightly increase the movement speed to compensate for occasional network latency spikes, preserving smooth gameplay.
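
The sketch below illustrates this path-following and catch-up idea in a simplified 2D form; the speed, tick length, and tolerance values are illustrative assumptions, not recommendations:

```python
import math

# Sketch: the server heads straight for the latest client-reported node each
# tick, and speeds up slightly when it has fallen too far behind, instead of
# replaying the client's exact path.

BASE_SPEED = 5.0        # units per second (assumed)
CATCH_UP_FACTOR = 1.15  # small boost when the gap grows
MAX_GAP = 2.0           # tolerated server/client distance before boosting

def step(server_pos, target_pos, dt):
    dx, dy = target_pos[0] - server_pos[0], target_pos[1] - server_pos[1]
    dist = math.hypot(dx, dy)
    if dist < 1e-6:
        return target_pos
    speed = BASE_SPEED * (CATCH_UP_FACTOR if dist > MAX_GAP else 1.0)
    move = min(dist, speed * dt)
    return (server_pos[0] + dx / dist * move, server_pos[1] + dy / dist * move)

pos = (0.0, 0.0)
for client_node in [(1.0, 0.0), (2.0, 1.0), (4.0, 3.0)]:  # nodes reported by the client
    pos = step(pos, client_node, dt=0.1)
    print(round(pos[0], 2), round(pos[1], 2))
```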

Xenon Proxy Technology: A Global Synchronization Solution
By leveraging Xenon’s Proxy Edge Technology, developers can significantly reduce cross-regional latency, enabling players worldwide to experience stable and low-latency gameplay.

We understand that building synchronization systems for open-world games is a complex and highly technical challenge. However, with robust state synchronization, eventual consistency, and Xenon’s advanced architecture, your game can seamlessly connect players across the globe, providing an immersive, uninterrupted experience where every player can freely explore and interact under the same virtual sky.

Client-Server Synchronization in Online Games (Part 1)

Do You Use Frame Synchronization Technology?

In online game development, network message transmission inherently suffers from unstable latency fluctuations. Synchronization issues are among the biggest challenges faced by new developers entering the online gaming domain, particularly because smooth gameplay directly impacts the player’s experience.

The Importance of Frame Synchronization

In highly latency-sensitive games such as MOBAs, Frame Synchronization is a common networking model. This model ensures fairness and smooth control by maintaining absolute consistency between the client and server timelines. Achieving this goal requires optimizations on both hardware and software levels:

  1. Hardware-Level Optimization:
  • Deploy dedicated servers in different regions to minimize network latency.
  • Leverage cloud platforms (e.g., GCP or AWS) to provide stable real-time connections using high-speed networks.
  • Through these methods, physical network latency can be controlled within the 30–400ms range.
  2. Software-Level Optimization:
  • Frame Synchronization technology divides game time into fixed time slices, synchronizing all player commands based on these intervals (see the sketch after this list).
  • Considering the fastest human reaction time (approximately 150ms), we derive the following conclusions:
    1. Actions occurring within 150ms can be considered simultaneous.
    2. Latency between client and server within 150ms has minimal impact on user experience.
    3. Time slices adjusted by a speed factor of 0.85–1.15x are imperceptible to players.
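
Here is the minimal sketch referenced above: commands are bucketed into fixed-length frames and broadcast per frame, so inputs that land in the same slice are treated as simultaneous. The 50ms slice length is an assumed example, and determinism still depends on shared seeds and identical client logic:

```python
# Sketch of fixed time slices for frame synchronization (assumed slice length).

FRAME_MS = 50  # real games tune this per title

def frame_of(timestamp_ms):
    return timestamp_ms // FRAME_MS

frames = {}  # frame index -> list of (player, command)

def submit(player, command, timestamp_ms):
    frames.setdefault(frame_of(timestamp_ms), []).append((player, command))

submit("A", "move_left", 1012)
submit("B", "cast_spell", 1030)  # same 50 ms slice as A -> treated as simultaneous
submit("A", "stop", 1071)

for frame in sorted(frames):
    print(f"frame {frame}: broadcast {frames[frame]}")
```
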
Optimization Mechanisms for Frame Synchronization

Based on the above conclusions, we have designed the following optimization rules for Frame Synchronization:

  1. Baseline Latency Setting:
  • Before the game starts, measure network latency for all players in a room and set the highest latency as the baseline.
  • If latency exceeds 150ms, cap the baseline at 150ms and align all players to this reference value.
  2. Dynamic Time Adjustment:
  • During gameplay, adjust time synchronization dynamically based on the time difference between client and server, within a 0.85–1.15x range (see the sketch after this list).
  • When the time difference exceeds 150ms, the game accelerates; when it falls below 150ms, it decelerates.
  • Maintain the time offset within the 100–200ms range.
  3. Periodic Latency Calibration:
  • Regularly monitor network latency and recalibrate the baseline value if necessary.
  • Prevent incorrect initial calibration data from causing prolonged deviation, which could affect long-term gameplay stability.
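
The sketch referenced above shows the speed-adjustment rule in its simplest form; the thresholds mirror the numbers in the list and are illustrative rather than tuned values:

```python
# Sketch of dynamic time adjustment: keep the client's playback offset from the
# server near the target by scaling game speed within the 0.85x-1.15x range.

TARGET_OFFSET_MS = 150
MIN_SCALE, MAX_SCALE = 0.85, 1.15

def speed_scale(offset_ms):
    if offset_ms > TARGET_OFFSET_MS:   # fallen too far behind the server: speed up
        return MAX_SCALE
    if offset_ms < TARGET_OFFSET_MS:   # running ahead: slow down
        return MIN_SCALE
    return 1.0

for offset in (90, 150, 210):
    print(offset, "->", speed_scale(offset))
```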

Through the above mechanisms, we can effectively overcome network latency issues, ensuring a stable and smooth gaming experience globally. Developers can fine-tune these parameters based on their game type to create the most suitable solution.

As technology continues to evolve, Frame Synchronization still holds potential for further improvements, whether in reducing latency, enhancing performance, or adapting to various game scenarios. These areas remain key directions for our ongoing efforts.

How Edge Servers Are Transforming Game Development


In traditional game development, clients typically connect directly to the logic servers responsible for handling game operations. While this architecture is simple, it presents several significant risks and challenges:

– Connection anomalies are hard to detect: When connection issues arise during server switching, they can only be identified through timeout mechanisms, resulting in a poor user experience.

– Servers are exposed to security risks: Logic servers, being directly accessible over the internet, are highly vulnerable to non-application-level attacks such as hacking or DDoS attacks. Beyond service disruption, these attacks can lead to the leakage of sensitive data.

– High defense costs: Protecting against these threats often requires expensive hardware firewalls or traffic-scrubbing services, greatly increasing operational expenses.

Inspired by live-streaming technologies, we introduced edge servers into game development to offer a groundbreaking solution:

– Shielding logic servers: Edge servers are deployed in front of logic servers and handle client connections, focusing solely on packet forwarding and data caching (sketched below). This design ensures that even if an attack occurs, sensitive data remains secure.

– Simplified DDoS mitigation: Since edge servers do not process logic operations, they can endure resource-exhaustion attacks without causing service disruption. Combined with cloud-based load-balancing features, users can quickly reconnect and resume gameplay with minimal impact.

– Optimal cost and performance balance: Leveraging cloud auto-scaling capabilities, additional endpoints can be rapidly deployed during attacks, ensuring cost-effective and efficient protection.
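
As a rough sketch of the forwarding role described above (Python asyncio; the addresses and ports are hypothetical), an edge node can relay bytes between the client and the logic server without holding any game state of its own:

```python
import asyncio

# Minimal sketch of an edge node that only forwards bytes between the client
# and the logic server. It holds no game state, so exhausting or crashing it
# cannot expose gameplay data.

LOGIC_HOST, LOGIC_PORT = "10.0.0.5", 7000  # internal address, never exposed (assumed)

async def pump(reader, writer):
    try:
        while data := await reader.read(4096):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle_client(client_reader, client_writer):
    server_reader, server_writer = await asyncio.open_connection(LOGIC_HOST, LOGIC_PORT)
    # Forward in both directions until either side closes.
    await asyncio.gather(pump(client_reader, server_writer),
                         pump(server_reader, client_writer))

async def main():
    edge = await asyncio.start_server(handle_client, "0.0.0.0", 9000)
    async with edge:
        await edge.serve_forever()

# asyncio.run(main())  # listens on the public port and relays to the logic server
```
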
What’s even more exciting is how Xenon enhances the application of edge servers:

– Dynamic data-driven CDN services: By transforming game object data into streams, edge servers equipped with Xenon can handle dynamic data efficiently.

– Global deployment for low latency: Edge servers are distributed across data centers worldwide, delivering stable and low-latency gaming experiences for players everywhere.

Xenon is redefining game server architecture, empowering developers to build more secure and efficient gaming operations. If you’re looking for a reliable technology partner, we’re here to help!

Resource Management in High-Competition Scenarios: Challenges and Solutions in Popular Events and System Design

In high-competition scenarios, such as the rush to purchase tickets for popular concerts, it’s common to see hundreds of thousands or even millions of users flooding the system the moment sales open, often causing service disruptions or crashes. To tackle such challenges, it is essential to decompose the problem and address it systematically:

  1. Sudden Surge of Connections
    These events typically have a well-defined time frame, allowing for proactive preparation, such as scaling hardware or leveraging cloud platforms’ AutoScale mechanisms. Additionally, a robust load balancing strategy can effectively distribute traffic, reducing system bottlenecks and improving stability.
  2. Competition for Resources
    This challenge primarily arises from the high-pressure load on foundational services (especially database access) caused by a sudden influx of users. Scaling databases can be complex and costly, often becoming a performance bottleneck. To address this, we can adopt several measures:
  • Database Optimization: Refine table structures and indexing strategies.
  • Caching Technology: Utilize efficient caching solutions like Redis to minimize direct database access.
  • Innovative Solutions: For example, the Xenon system supports high-efficiency caching in memory and offers dynamic online scaling and shrinking capabilities. This can alleviate resource pressure during peak times and reduce costs during off-peak periods.
  3. Resource Contention and Locking
    In managing ticket or coupon resources, the key is to reduce the intensity of resource competition and improve operational efficiency. Specific measures include:
  • Resource Segmentation: Divide tickets by location, quantity, or even seat level to reduce the load on single points of contention (see the sketch after this list).
  • Replica Mapping: Use replicated data for seat selection or viewing to minimize direct access to primary resources.
  • Optimized Locking Mechanisms: Avoid high-cost database transactions by adopting alternative mechanisms like fail-safe cancellation or scheduled monitoring to enhance stability under high-frequency operations.
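
To illustrate the segmentation idea referenced above, here is a simplified in-process sketch; per-segment locks and counters stand in for whatever store a real ticketing system would use:

```python
import threading

# Sketch of resource segmentation: the ticket pool is split into independent
# segments (e.g., seating blocks), each with its own lock and counter, so
# buyers contend on a fraction of the inventory instead of one global lock.

class Segment:
    def __init__(self, remaining):
        self.remaining = remaining
        self.lock = threading.Lock()

    def try_reserve(self):
        with self.lock:
            if self.remaining > 0:
                self.remaining -= 1
                return True
            return False

segments = {"block-A": Segment(500), "block-B": Segment(500)}

def buy(preferred_block):
    # Try the preferred block first, then fall back to any block with stock.
    order = [preferred_block] + [b for b in segments if b != preferred_block]
    for block in order:
        if segments[block].try_reserve():
            return block
    return None  # sold out

print(buy("block-A"))
```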

Moreover, to address timeout issues caused by data conflicts, Xenon’s AccessSpace mechanism offers an innovative locking strategy: it retains transactional characteristics and maintains stable efficiency even when ticket-resource access is merged with other operations within a single process.

Whether it’s managing ticket sales or other high-concurrency scenarios, the key to solving resource competition lies in optimizing technology and architecture. Striking a balance among instantaneous loads, resource utilization, and operational workflows is crucial to standing out in competitive environments. Hopefully, these insights can inspire enterprises or teams facing similar challenges!

Why Can’t Great New Game Mechanics Be Created?

Have you ever come up with an amazing gameplay idea, only to have it shot down by engineers? In my past development experience, simple cross-server operations like forming parties, trading, or chatting were easy to implement. However, when it came to more complex data-heavy interactions or frequent back-and-forth data exchanges, the challenges were overwhelming.

Hidden within the technical constraints is not just a performance bottleneck, but a cage for creativity. As a backend developer, I’ve faced this frustration countless times. While current technology can meet some demands, every time we aim for a breakthrough, we hit a seemingly insurmountable barrier.

Here’s an Example
In traditional systems, high-frequency real-time interactions are usually confined to a single server. These are often handled within a single machine using multi-threading, secured by atomic operations, Mutex, and other mechanisms. This approach ensures performance within manageable loads. But what happens when the hardware load is exceeded?
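
For reference, the single-server case really is this simple. A rough sketch (illustrative Python, not any particular engine's code) of a trade protected by one process-wide lock:

```python
import threading

# Sketch of the single-server case: both players' inventories live in the same
# process, so a trade only needs an ordinary lock around the swap.

inventories = {"alice": ["sword"], "bob": ["shield"]}
trade_lock = threading.Lock()

def trade(a, item_a, b, item_b):
    with trade_lock:  # one in-process lock is enough on a single machine
        if item_a in inventories[a] and item_b in inventories[b]:
            inventories[a].remove(item_a)
            inventories[b].remove(item_b)
            inventories[a].append(item_b)
            inventories[b].append(item_a)
            return True
        return False

print(trade("alice", "sword", "bob", "shield"), inventories)
```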

When users are distributed across multiple servers, interactions become complex and fraught with risks. Imagine a scenario where user data needs to be exchanged frequently across servers, requiring synchronization and security guarantees. Developers often spend an enormous amount of effort managing issues like deadlocks and data corruption during transmission. To prevent catastrophic backend crashes, engineers are left with no choice but to impose restrictions, effectively stifling creative ideas. Many concepts remain relegated to “ideal scenarios,” while game designers’ visions are tethered by the constraints of technology.

Are Backend Developers Destined to Settle? I Refuse.
These technical limitations subtly undermine a product’s competitiveness. Imagine a boundary-less world where players can seamlessly interact across servers, realizing truly large-scale multiplayer experiences. Why can’t this freedom be our standard? Current server clusters partially address data transmission issues, but they lack a mechanism to secure and optimize cross-server operations without compromising performance or safety.

My Solution: Breaking Through
I decided to take matters into my own hands. I developed a mechanism called AccessLock, which enables server clusters to lock data across servers while ensuring security and priority. This mechanism integrates Coroutine capabilities to avoid wasting resources during locking. However, when faced with the deadlock issues that traditional locking mechanisms cannot avoid, I realized the root cause lay in the interdependence of multi-lock operations.

I took it a step further, drawing inspiration from transactional processing concepts to create AccessSpace, the core functionality of the Xenon system. AccessSpace is a technique that establishes an independent processing space for each cross-server operation, limiting all actions to a single lock. This allows parallel processing at the AccessSpace level and fundamentally eliminates deadlock risks.

Unleashing Creativity
This breakthrough not only makes real-time cross-server interactions possible but also empowers game designers to unleash their creativity, building worlds of limitless potential. The constraints of the past are now history. We no longer need to compromise; we now have the power to create truly boundary-less digital experiences.

Breaking the Limits of Shard Technology: Building the Framework for Infinite Worlds

As a backend game developer, I instinctively analyze the technologies and strengths of every game I encounter. One recurring observation is that even “large-scale” game worlds are often constrained to limited player interactions within a single shard, with battles typically supporting only a few hundred participants.

Challenges like latency in cross-shard operations, inconsistent data management, and deadlocks under high concurrency have long deterred developers. Many abandon cross-shard features entirely due to the overwhelming complexity.

“Is the dream of a truly infinite world with massive real-time interactions beyond our reach?”

Driven by this question, I participated in an internal competition at IGG (I Got Games). I showcased a prototype that allowed tens of thousands of units to engage in real-time battles in an SLG environment. The technical capacity of the solution earned me first place.

Encouraged by this recognition, I embarked on a deeper exploration of a comprehensive solution, eventually developing an object-based distributed collaboration framework that addresses these longstanding bottlenecks.

Challenges of Traditional Shard Architectures
In traditional shard systems, each shard operates on separate threads or physical servers. This approach creates two critical challenges:

– Latency and Performance Wastage in Cross-Shard Operations: Blocking mechanisms cause severe delays, while non-blocking methods require significant effort to ensure data safety and consistency. The high-frequency interaction requirements of games exacerbate these issues.

– Risk of Deadlocks Under High Concurrency: Game logic operations often face resource contention, leading to deadlocks. Developers are forced to avoid real-time cross-shard interactions altogether.

These limitations result in isolated gameplay regions within shards, significantly restricting player interactions. Overcoming these challenges became the focus of my efforts.

The Solution: Object-Based Distributed Collaboration Framework
To tackle these challenges, I developed a new architecture that shifts the constraints from spatial limitations to task and object count. Here’s how the framework works:

  1. Distributed Object Management
    Game objects are distributed across multiple logic servers. Each object’s autonomous behavior and player interactions are transformed into discrete tasks. These tasks operate within a virtual space called AccessSpace, ensuring consistency and security across servers through linear data spaces.
  2. Parallel Task Execution
    Tasks are executed concurrently on logic servers, which dynamically scale using a technology we developed called BitChain. This scalability is akin to web services, preventing latency issues caused by insufficient computational resources.
  3. Low-Latency Data Synchronization
    Changes to object states during task execution are streamed to proxy servers and propagated to nearby clients. This process, similar to CDN-like data distribution, significantly reduces latency.
  4. Simplified Development Logic
    Developers can focus on building in-game objects by inheriting a basic Entity Interface to define object attributes. Using flexible combinations of Components (attributes), Controllers (behavior), and Functions (operations), they can create a wide range of game elements.
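
Purely as an illustration of that composition style, the sketch below uses hypothetical names; Entity, Component, Controller, and the method names are stand-ins, not Xenon's actual interfaces:

```python
# Illustrative-only sketch of composing a game object from attributes,
# behaviour, and operations. All names here are hypothetical.

class Component:  # attributes
    pass

class Health(Component):
    def __init__(self, hp):
        self.hp = hp

class Controller:  # behaviour, ticked by the framework
    def update(self, entity, dt): ...

class WanderController(Controller):
    def update(self, entity, dt):
        print(f"{entity.name} wanders for {dt}s")

class Entity:
    def __init__(self, name):
        self.name = name
        self.components = {}
        self.controllers = []

    def add(self, component):
        self.components[type(component)] = component

    def control(self, controller):
        self.controllers.append(controller)

    def tick(self, dt):
        for c in self.controllers:
            c.update(self, dt)

goblin = Entity("goblin")
goblin.add(Health(30))
goblin.control(WanderController())
goblin.tick(0.05)
```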

Performance Testing: Exploring the Possibility of Infinite Worlds
We conducted a large-scale performance test simulating 60,000 players interacting in real-time within the same space: Watch the Demo

Test Environment: GCP, 61 Servers

  • Logic Servers: 10 machines (2C4T each)
  • Proxy Servers: 25 machines (2C4T each)
  • Bot Simulators: 25 machines (4C8T each)
  • Database Server: 1 machine (4C8T, MongoDB)

Performance Highlights: With processes running 3 network I/O threads and 4 logic services, a single logic server achieved 180,000 RPCs per second. The number of supported players is no longer bound by spatial constraints but instead by the number of objects affected by tasks.

For example, at 180,000 RPCs per second (about 180 per millisecond), a task impacting 180 objects would take roughly 1 millisecond to execute on a single logic server.

Unlocking New Possibilities for Game Design
This technology allows developers to design gameplay without avoiding cross-shard interactions, making infinite-world games achievable. Although large-scale abilities like AOE still require task splitting, I am confident that ongoing optimizations will further minimize such constraints.

Reflection: A Decade of Innovation


The journey from concept to prototype spanned ten years. Many times, I considered keeping the technology to myself, thinking, “It’s too complicated to make it usable for others.” But my desire to advance backend technology pushed me forward. With support from Taiwan’s SBIR grant, I finally consolidated scattered components into a unified prototype.

I hope this technology inspires game developers to rethink the possibilities of shardless, infinite virtual worlds. I’m eager to collaborate with more developers to push the boundaries of what’s possible.

Let’s build the future of infinite worlds together!