Foreword

Over the past year I have burned through more than $10,000 in token bills across Google, Anthropic, and OpenAI. So everything in this article judges the coding ability shown by the strongest models — opus4.6, codex-5.3-xhigh, gemini3-pro and the like — used without any usage limits.

The Phenomenon: Agents' Credibility Crisis

It is like the health-supplement salesman with his "Big-Data Quantum AI Bio-Magnetic Field Therapy Device", telling me that this machine — originally 200,000, on special today for 88,000 — will permanently cure my cervical spondylosis, lumbar disease, hypertension, and diabetes, and on top of that reverse my atherosclerosis, coronary heart disease, erectile dysfunction, premature ejaculation, and so on.

Agent coding is in exactly that state right now.

The agent throws a pile of emoji at me to celebrate that the 70,000-80,000 lines of garbage it just generated passed every test case, and tells me it is ready to replace the production system. Do you believe it?

Suppose you are a project lead. Your most reliable teammate delivers 80% of the tasks you hand him on time and with high quality. If he swears to your face that the feature will ship next week, you can be fairly confident it will land the week after at the very latest. But when an agent assures you its current quality and completeness are ready for production, do you believe it?

Meanwhile, countless paid knowledge communities, self-media accounts, and self-styled AI mentors and godfathers are out there harvesting tuition fees. The gist is always the same: teach you how to prompt (tools/skills are the same medicine in a new bottle), then tell you to run more agents in parallel.

A Real Case

An agent's blind confidence misleads not only its user but also the agent itself.

I once gave an agent this task: integrate the GCP Transcoding service into the current Kotlin project. I provided the product page and documentation as references and asked it to plan. The agent produced the following plan:

  1. After reading the docs, it found that the service only ships a Java SDK, and since the project uses another JVM language, it is not natively supported
  2. Follow the RESTful documentation and its field definitions, and integrate manually with ktor-client
  3. Write the code and run the tests

Did you spot the problem in this plan?

In fact, if you have ever done this kind of work by "artisanal hand-coding", you know that hand-rolling a RESTful integration is far less simple than it looks. Even the basic capabilities of the Transcoding service involve 5-10 endpoint calls, and each endpoint's inputs and outputs have dozens or even hundreds of nested field definitions. Agents make frequent mistakes on long-context tasks like this.

Whereas if the agent had chosen to write a thin wrapper around the Java SDK (which Google itself generates from protobuf), the feature could have shipped stably within half a day to a day.

If the agent implements the RESTful API by hand instead, it is likely to sink into a debugging quagmire — because when an AI hallucination gets an optional field's name wrong (casing, camelCase vs. snake_case), the program does not fail immediately. How long until you notice the implementation is wrong? When customers complain after it reaches production?

Why can't we trust agents? After a year of practice, I believe the root cause is this: we lack effective means of verification.

The Cause: Verification Has Failed Across the Board

Code Review No Longer Works

A common view goes: in a sense AI has not replaced programmers, it is just a new, more advanced tool. As the person producing the code, you still have to understand what needs to be done, and you have to understand every line you merge.

In my view, that is very hard to do on a real project.

In our internal reviews, most of the time we reviewed code style; the author walked us through the design, we listened, nodded, and let it through. That approach used to work, because:

  • PRs with poor code style also had messy designs, poor performance, and no extensibility
  • PRs with good code style had clear designs and thoughtful performance; even when a bottleneck existed it was easy to fix, and the extensibility turned out fine

In the age of agent coding, that correlation no longer holds — if anything, it is inverted.

An agent can generate fully commented, beautifully styled garbage code in a minute. When I skim it, I am regularly fooled by that first layer of appearance and let my guard down. The trouble is that this kind of garbage is hard to catch at review time; usually something blows up after release, and only when you dig back through it do you discover it was "chocolate-flavored" garbage all along.

Trust me: opus and codex-xhigh — the models you are too stingy to use — have exactly the same problem even when I crank them to thinking+max and push them as hard as they will go.

Tests No Longer Work

Testing is even worse. The test cases themselves are vibe-coded by AI, so the agent is both referee and player — whatever it says, goes. It has conned me more than once or twice: thousands of lines of getter/setter test cases, everything green, followed by "ready to release to production".

As in the GCP Transcoding example above: the agent misspells an optional field's name and the tests still pass, because the agent wrote the tests too. When both sides are wrong in the same way, it looks "correct".

A Comparison with Traditional Industry

At this point someone may ask: when other industries were empowered by machinery and automation, didn't they face the same problem?

Let me use a CNC machine as an analogy:

A CNC machine is more precise than I am, but once it produces a workpiece, we can measure that workpiece physically and objectively — grab a caliper, check whether the tolerance is within ±0.01mm, and it is obvious at a glance. Even though I could never achieve that precision by hand, I still retain the ability to judge the quality of the CNC machine and its workpieces.

That is the state of traditional manufacturing after mechanization: the machines are precise, quality is uniform and stable, and humans can still evaluate what the machines produce.

So what should the ideal state of software development look like after the agent revolution? The code an agent delivers genuinely covers the requirements, has basic security hardening, is easier to maintain over the long run (even if only by the agent itself, ignoring human readability), performs better, and uses fewer resources.

A program must do more than implement the features in today's requirements document; it also needs basic security protections. A project whose features work but which is riddled with security holes is just as unacceptable.

And right now we cannot tell whether an agent has reached that state. Even on the baseline requirement of "the feature works", agents still cannot do without human guidance and test verification — never mind the higher-order criteria of security, maintainability, and performance.

Moreover, a program is not a physical workpiece; you cannot measure it with physical instruments. There is no caliper that reads off the "quality tolerance" of a piece of code.

So we cannot judge a program by the standards used for physical workpieces. Instead we should judge it the way we judge the CNC machine itself — and one successful production run (all tests passed) is nowhere near enough to certify a machine, let alone a program.

A CNC machine that cuts small plastic and aluminum parts precisely will not necessarily hold tolerance on titanium or stainless steel. The latter tests the machine's overall rigidity, and once the workpiece gets heavy, thermal expansion puts much harder demands on the program's tool-path compensation.

Likewise, vibe-coded software that passes a couple of mouse-click tests locally is overwhelmingly likely to blow up the moment it goes live.

Traditional industry: machines are precise, quality is stable, and people can evaluate the output. Software: agents produce fast and cover a lot of ground, but people cannot yet evaluate the output reliably. That is the problem.

The Approach: Let the Compiler Be Your Gatekeeper

Since neither human evaluation (code review) nor automated tests can be trusted, we need another form of evaluation — one that is objectively verifiable and does not depend on the agent's own judgment.

My view runs against the mainstream AI-coding consensus:

The agent era needs strong typing and strictly verifiable languages more than ever — not letting agents loose on python/js/go, plus anyscript.

Why?

AI piles up garbage so fast that never mind tens of thousands of lines — once it generates more than 100 lines I can no longer be bothered to read them line by line. But reading type signatures and pre/post-conditions is clearly faster than reading through the logic, and only Rust/Scala/Haskell, or even formal methods, can provide those.

I wrote my own code in this style long before agent coding existed, mainly because once the codebase grows, the compiler's checks are more reliable than my eyes. Now that agent coding has taken off, I find that making the agent follow the same discipline gives me better control over output quality — only to a degree, of course, but it beats doing nothing.

Back to the GCP Transcoding example: if the agent had used a strongly typed language, a misspelled field name could at least be partially caught by the type system at compile time. With RESTful plus weak typing, the mistake is silent, and by the time you notice, it is already too late.
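As a minimal sketch of the difference (the field names below are invented for illustration, not the real GCP Transcoding API): an untyped map silently accepts a misspelled optional field, while a case class lets the compiler reject it.

```scala
object TypedVsUntyped {
  final case class JobConfig(inputUri: String, outputUri: String, ttlAfterCompletionDays: Option[Int])

  // Weakly typed: the misspelled key compiles fine and fails silently at runtime (or never).
  val untyped: Map[String, Any] =
    Map("input_uri" -> "gs://in", "output_uri" -> "gs://out", "ttlAfterCompletionDay" -> 7)

  // Strongly typed: the same typo simply does not compile.
  val typed: JobConfig =
    JobConfig(inputUri = "gs://in", outputUri = "gs://out", ttlAfterCompletionDays = Some(7))
  // JobConfig(inputUri = "gs://in", outputUri = "gs://out", ttlAfterCompletionDay = Some(7)) // rejected by the compiler
}
```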

Results in Practice

To be fair, credit where credit is due: today's strongest models have little trouble getting code to compile, unless you are even more obsessed with type-level gymnastics than I am.

Letting an agent run an entire engineering effort unsupervised is still dismal, but getting through the compiler is no longer an issue. Pure-FP Scala and tagless final are followed quite well by opus 4.5 and codex-xhigh, and compilation passes without my help. The compiler errors from functional type gymnastics are typically dozens or hundreds of lines of type gibberish, and as I write this, reading and fixing those errors is no longer hard for an agent.
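For readers unfamiliar with the style being referenced, here is a minimal tagless-final sketch (illustrative only; it assumes cats is on the classpath, and the trait and names are invented): the effect type is abstracted behind F[_], so the signatures alone carry most of what a reviewer needs to check.

```scala
import cats.Monad
import cats.syntax.all._

// Capability trait: what the program may do, expressed against an abstract effect F[_].
trait UserRepo[F[_]] {
  def find(id: Long): F[Option[String]]
}

// Logic written only against the capability and a Monad constraint,
// so it can run in IO in production and in a simple test monad in tests.
class Greeter[F[_]: Monad](repo: UserRepo[F]) {
  def greet(id: Long): F[String] =
    repo.find(id).map(_.fold("who are you?")(name => s"hello, $name"))
}
```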

Limitations

Of course, this approach has its limits.

The reality is that today's formal-methods toolchains and ecosystems are still very thin; they typically support only a small, restricted subset of a single language. Some syntax and patterns that are routine in engineering are unsound on the FM side, or simply not yet proven. Not to mention how easily things fall into infinite loops or unsolvable proofs — one careless step and the z3 solver is searching a possibility space larger than the universe, and the proof will not finish before the end of time.

Strong typing solves part of the problem, but not all of it.

A Deeper Dilemma: Plan and Execute

Even with strong typing as an evaluation tool, there is a deeper problem: how the agent understands and executes a plan.

The GCP Transcoding example already exposed it: the agent chose to hand-roll the RESTful integration instead of wrapping the Java SDK. That is not a coding error; it is the wrong route. The compiler can tell you whether the code has errors, but it cannot tell you whether this is the road you should be on.

An even more extreme example: give an agent a complex task — develop a rocket engine. The plan settles on a full-flow staged combustion cycle, and the chosen route is a coaxial-shaft layout.

If the agent does not follow the plan: it may quietly drift off to a tap-off cycle instead. The compiler can tell you whether the code has syntax errors, but not whether this is the rocket engine you asked for.

If the agent follows the plan too faithfully: it really builds the coaxial layout, and then an even bigger problem shows up after launch — the dynamic seals on a coaxial shaft are hard to get right, oxidizer and fuel leak into each other along the turbine shaft, and one of the two preburners blows up. The compiler can guarantee the types are correct, but it cannot guarantee the design is sound.

Today's plan/edit mode switching is only a stopgap for the current stage, a workaround born of necessity. This problem is harder to solve than the missing-evaluation problem, because it concerns the understanding of requirements and design, not just code quality.

Peak at First Sight

Agent coding has one striking characteristic: the first encounter is the peak.

Ask an agent to start a brand-new CRUD project, or a React admin page, and its first attempt genuinely astonishes everyone — clean, well structured, even thoughtfully sprinkled with comments and error handling.

But as the project is maintained longer and longer, the unspoken, undocumented, conventional hidden context keeps growing. Which field is actually deprecated but never deleted, which API carries a historical quirk, which modules have a subtle dependency between them — the veterans all carry this in their heads, but nobody ever wrote it down.

An agent cannot handle unbounded context; it can only compress and summarize, selectively forgetting details. What gets dropped may be the lessons of a few failed attempts, or it may be the offset of a key data structure, a register address, an enum definition.

Every time you open a new session, you face what is effectively a brand-new "employee" — it seems to inherit the compressed context (claude.md / agents.md), but it knows none of the details. You have to explain all over again: "No — the docs say the interface works like this, but in practice we never pass that parameter..."

For highly repetitive work — CRUD, Spring, React — this hardly hurts; every round is roughly the same, and whatever is forgotten is forgotten.

But in embedded systems development, any forgotten detail may be filled in by the agent's free-wheeling hallucinations. Wrong register address? Misconfigured interrupt priority? Conflicting DMA channels? At best the system crashes; at worst the hardware is permanently fried. That is not a problem you fix with "patch the bug and redeploy".

In the Agent Era, Are CS Fundamentals Still Worth Learning?

If evaluating agent output is the core problem, then a developer's fundamentals obviously still have to be learned. Otherwise, with what will you judge the quality of the code, the modules, the architectural design an agent produces? A developer with no ability to evaluate is no different from the old folks waiting to be fleeced in a health-supplement shop.

So how should you study?

Open LeetCode, and before you have finished reading the problem, Copilot has already completed the answer. Click Submit & Run: top 1%. That's it?

My advice: now that AI exists, you obviously cannot stay at yesterday's difficulty — turn up the intensity, to the point where AI cannot do it.

Don't worry, nothing that should be learned will be skipped. Once the difficulty goes up, the AI hallucinates more and more, and every gap in your knowledge has to be filled. Along the way the AI will also do you plenty of disservices — but that is exactly where the learning happens.

For example, if you are implementing a Red-Black Tree, a B-Tree, or an AVL Tree, raise the bar: add formal verification to the algorithm, then add generic support on top. Rest assured, today's strongest models cannot write that.

In fact, hallucinations actually help you learn — they encode the common misconceptions, and the process of verifying and correcting them is itself what deepens the learning.

Closing Thoughts

AI frameworks, models, tools, and methodologies appear at a dizzying pace. But at bottom, they are all additions and patches bolted onto the model.

When a human completes an entire workflow, they do not need to split themselves into multiple cooperating "sub-agents" — because humans genuinely have memory and genuinely learn. The longer you do the work, the more you grow and the more fluent you become. The hidden context in a project, the pitfalls you stepped into, the unwritten conventions — all of it settles into experience.

Agents are the opposite. The longer the current context, the more visibly their intelligence degrades. Even when a detail is still inside the context window, the agent starts ignoring it and hallucinating something "plausible-looking" in its place.

The core problem has never changed: we still lack reliable means to evaluate what agents produce. Strong typing is the most practical partial answer I have found, but it is only a partial answer.

Skip a day of learning and you miss a lot. Skip a year and, somehow, you haven't missed much.

Frameworks and tools churn, hit products come and go, but the hype far outweighs their real capability and value. CS fundamentals are the hard currency that has stood the test of time. Rather than chasing new frameworks and tools, put your energy into strengthening your ability to evaluate agent output — that is what is truly scarce in the agent era.

Many data systems refresh lists by polling, which delays content-status updates and prevents the page from giving users immediate feedback. Shortening the polling interval on the client side puts excessive load on the server, which should be avoided.

To solve this problem, this article proposes an event subscription mechanism. This mechanism provides real-time updates to the client, eliminating the need for polling refresh and improving the user experience.

Terminologies and Context

This article introduces the following concepts:

  • Hub: An event aggregation center that receives events from producers and sends them to subscribers.
  • Buffer: An event buffer that caches events from producers and waits for the Hub to dispatch them to subscribers.
  • Filter: An event filter that only sends events meeting specified conditions to subscribers.
  • Broadcast: An event broadcaster that broadcasts the producer's events to all subscribers.
  • Observer: An event observer; each subscriber receives its events through its own observer.

The document discusses some common concepts such as:

  • Pub-Sub pattern: It is a messaging pattern where the sender (publisher) does not send messages directly to specific recipients (subscribers). Instead, published messages are divided into different categories without needing to know which subscribers (if any) exist. Similarly, subscribers can express interest in one or more categories and receive all messages related to that category, without the publisher needing to know which subscribers (if any) exist.
  • Filter:
    • Topic-based filtering routes events by topic. Producers publish events to one or more topics, and subscribers subscribe to one or more topics; only events on the subscribed topics are delivered. When a terminal client subscribes directly, however, this granularity is too coarse and does not suit a common hierarchical structure.
    • Content-based filtering routes events by message content. Producers publish events as usual, and subscribers attach filters describing the event attributes they care about; only events whose content matches the filter are delivered. This suits a common hierarchical structure.

Functional Requirements

  • Client users can subscribe to events through gRPC Stream, WebSocket, or ServerSentEvent.
  • Whenever a record's status changes (e.g. when the record is updated by an automation task) or when other collaborators operate on the same record simultaneously, an event will be triggered and pushed to the message center.
  • Events will be filtered using content filtering mode, ensuring that only events that meet the specified conditions are sent to subscribers.

Architecture

flowchart TD
  Hub([Hub])
  Buffer0[\"Buffer drop oldest"/]
  Buffer1[\"Buffer1 drop oldest"/]
  Buffer2[\"Buffer2 drop oldest"/]
  Buffer3[\"Buffer3 drop oldest"/]
  Filter1[\"File(Record = 111)"/]
  Filter2[\"Workflow(Project = 222)"/]
  Filter3[\"File(Project = 333)"/]
  Broadcast((Broadcast))
  Client1(Client1)
  Client2(Client2)
  Client3(Client3)
  Hub --> Buffer0
  subgraph Server
    Buffer0 --> Broadcast
    Broadcast --> Filter1 --> Buffer1 --> Observer1
    Broadcast --> Filter2 --> Buffer2 --> Observer2
    Broadcast --> Filter3 --> Buffer3 --> Observer3
  end
  subgraph Clients
    Observer1 -.-> Client1
    Observer2 -.-> Client2
    Observer3 -.-> Client3
  end

High-Level Overview

flowchart TD
  Pipe#a[[...Pipe...]]
  Pipe#b[[...Pipe...]]
  subgraph Hub
  direction LR
  Event1((Event1))
	Event2((Event2))
  Event3((Event3))
  Event4((Event4))
  Event5((Event5))
  Event6((Event6))
  Event7((Event7))
  Event8((Event8))
  Event1 -.-> Event2 -.-> Event3 -.-> Event4 -.-> Event5 -.-> Event6 -.-> Event7 -.-> Event8
  end
	Pipe#a -.-> Event1
  Event8 -.-> Pipe#b
  subgraph Client1
  direction LR
  C1Subscribe((Start))
  C1Cancel((End))
  Event2 -.-> C1Listen2
  Event3 -.-> C1Listen3
  Event4 -.-> C1Listen4
  Event5 -.-> C1Listen5
  C1Subscribe -.-> C1Listen2 -.-> C1Listen3 -.-> C1Listen4 -.-> C1Listen5 -.-> C1Cancel
  end
  subgraph Client2
  direction LR
  C2Subscribe((Start))
  C2Cancel((End))
  Lag(("❌"))
  C2Subscribe -.-> C2Listen1 -- "Poor Network" ---> Lag --"Packet loss"---> C2Listen5 -.-> C2Listen6 -.-> C2Listen7 -.-> C2Listen8 -.-> C2Cancel
  Event1 -.-> C2Listen1
  Event5 -.-> C2Listen5
  Event6 -.-> C2Listen6
  Event7 -.-> C2Listen7
  Event8 -.-> C2Listen8
  end

Clients should follow these steps:

  • Upon entering the page, subscribe as necessary.
  • After listening to the change event, debounce and re-request the list interface, and then render it.
  • When leaving the page, cancel the subscription.

Servers should follow these steps:

  • Subscribe to push events based on the client's filter.
  • When the client's message backlog grows too large, drop the oldest message from the buffer.
  • When the client cancels the subscription, the server should also cancel the broadcast to the client.

Application / Component Level Design (LLD)

flowchart LR
  Server([Server])
  Client([Client: Web...])
  MQ[Kafka or other]
  Broadcast((Broadcast))
  subgraph ExternalHub
    direction LR
    Receiver --> MQ --> Sender
  end
  subgraph InMemoryHub
    direction LR
    Emit -.-> OnEach
  end
  Server -.-> Emit
  Sender --> Broadcast
  OnEach -.-> Broadcast
  Broadcast -.-> gRPC
  Broadcast -.-> gRPC
  Broadcast -.-> gRPC
  Server --  "if horizontal scaling is needed" --> Receiver
  gRPC --Stream--> Client

For a single-node server, a simple Hub can be implemented using an in-memory queue.

For multi-node servers, an external Hub implementation such as Kafka, MQ, or Knative eventing should be considered. The broadcasting logic is no different from that of a single machine.

Failure Modes

Fast Producer-Slow Consumer

This is a common scenario that requires special attention. The publish-subscribe mechanism for terminal clients cannot always expect clients to consume messages in real time. However, message continuity must be maximally guaranteed. Clients may access our products in an uncontrollable network environment, such as over 4G or poor Wi-Fi. Thus, the server message queue cannot become too backlogged. When a client's consumption rate cannot keep up with the server's production speed, this article recommends using a bounded Buffer with the OverflowStrategy.DropOldest strategy. This ensures that subscriptions between consumers are isolated, avoiding too many unpushed messages on the server (which could lead to potential memory leak risks).
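As a rough sketch of this strategy (assuming Akka Streams on the JVM; all names and sizes here are illustrative, not the production implementation): the Hub is fed through a queue, each subscriber attaches its own filter and bounded buffer, and a slow client only loses its own oldest events.

```scala
import akka.actor.ActorSystem
import akka.stream.OverflowStrategy
import akka.stream.scaladsl.{BroadcastHub, Keep, Sink, Source}

object HubSketch extends App {
  implicit val system: ActorSystem = ActorSystem("hub")

  // Producer side: events are offered to a queue that feeds the broadcast hub.
  val (hub, events) =
    Source.queue[String](bufferSize = 1024, OverflowStrategy.dropHead) // drop the oldest on overflow
      .toMat(BroadcastHub.sink(bufferSize = 256))(Keep.both)
      .run()

  // Subscriber side: each client gets its own content Filter and bounded Buffer,
  // so one slow consumer cannot back up the server or the other subscribers.
  def subscribe(recordId: String): Unit =
    events
      .filter(_.contains(recordId))           // content-based Filter
      .buffer(64, OverflowStrategy.dropHead)  // per-client Buffer, DropOldest
      .runWith(Sink.foreach(e => println(s"client($recordId) <- $e")))

  subscribe("111")
  hub.offer("record=111 updated")
}
```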

Alternative Design

VMware published a very similar design back in 2013, but using a Go RingChannel.

Summary

This document proposes an event subscription mechanism to address the delay in updating content status caused by polling refresh. Clients can subscribe to events through any long connection protocol, and events will be filtered based on specified conditions. To avoid having too many unpushed messages on the server, a bounded buffer with the OverflowStrategy.DropOldest strategy is used.

Implementing this in Reactive Streams is straightforward, but you can choose your preferred technology to do so.

Overview

In the previous post, we discussed how to implement a file tree in PostgreSQL using ltree. Now, let's talk about how to integrate version control management for the file tree.

Version control is a process for managing changes made to a file tree over time. This allows for the tracking of its history and the ability to revert to previous versions, making it an essential tool for file management.

With version control, users have access to the most up-to-date version of a file, and changes are tracked and documented in a systematic manner. This ensures that there is a clear record of what has been done, making it much easier to manage files and their versions.

Terminologies and Context

A simple and naive implementation stores the full file metadata on every commit: files that have not changed are recorded as well, marked NO_CHANGE. This approach has a significant problem.

It is good for querying but bad for writing. When we need to fetch a specific version, the PostgreSQL engine only has to scan the index with the condition file.version = ?, which is very cheap in a modern database system. However, whenever a new version is created, the engine must write \(N\) rows into the log table (where \(N\) is the number of current files). That causes a write peak in the database and is unacceptable: severe write amplification. The way out is to stop storing NO_CHANGE rows when creating a new version, which dramatically reduces the write amplification.

In theory, all we need to write is the files that actually changed. If we can also find a way to fetch an arbitrary version of the file tree in roughly \(O(\log n)\) time per file, we can eliminate the unnecessary write amplification without giving up cheap reads.

Non Functional Requirements

Scalability

Consider the worst-case scenario: a file tree with more than 1,000 files that receives more than 10,000 commits. The nastiest possibility is that every commit changes every file, in which case the write-optimized scheme degenerates to the naive one and write performance drops accordingly, and the single table ends up holding more than 10 million rows, which is hard to split into partitioned tables.

Suppose \(N\) is the number of files and \(M\) is the number of commits. We need the time complexity of fetching a snapshot of an arbitrary version to be no more than \(O(N \cdot \log M)\). This is theoretically achievable.

Latency

In the worst case, the query can still respond in less than 100ms.

Architecture

Database Design

Illustration of the data structures.

Tech Details

Subqueries appearing in FROM can be preceded by the key word LATERAL. This allows them to reference columns provided by preceding FROM items. (Without LATERAL, each subquery is evaluated independently and so cannot cross-reference any other FROM item.) — https://www.postgresql.org/docs/current/queries-table-expressions.html#QUERIES-LATERAL

PostgreSQL has a keyword called LATERAL. Inside a join, it lets the subquery reference columns of a preceding FROM item in its WHERE condition. By doing so, we can tell the query optimizer exactly how to use the index. Since the data in a composite index is stored as an ordered tree, finding the maximum value, or any particular value, has a time complexity of \(O(\log n)\).

Finally, we obtain an overall time complexity of \(O(N \cdot \log M)\).

Performance

Result: Fetching an arbitrary version will be done in tens of milliseconds.

explain analyse
select f.record_id, f.filename, latest.revision_id
from files f
inner join lateral (
select *
from file_logs fl
where f.filename = fl.filename
and f.record_id = fl.record_id
-- and revision_id < 20000
order by revision_id desc
limit 1
) as latest
on f.record_id = 'f5c2049f-5a32-44f5-b0cc-b7e0531bf706';
Nested Loop  (cost=0.86..979.71 rows=1445 width=50) (actual time=0.040..18.297 rows=1445 loops=1)
-> Index Only Scan using files_pkey on files f (cost=0.29..89.58 rows=1445 width=46) (actual time=0.019..0.174 rows=1445 loops=1)
Index Cond: (record_id = 'f5c2049f-5a32-44f5-b0cc-b7e0531bf706'::uuid)
Heap Fetches: 0
-> Memoize (cost=0.57..0.65 rows=1 width=4) (actual time=0.012..0.012 rows=1 loops=1445)
" Cache Key: f.filename, f.record_id"
Cache Mode: binary
Hits: 0 Misses: 1445 Evictions: 0 Overflows: 0 Memory Usage: 221kB
-> Subquery Scan on latest (cost=0.56..0.64 rows=1 width=4) (actual time=0.012..0.012 rows=1 loops=1445)
-> Limit (cost=0.56..0.63 rows=1 width=852) (actual time=0.012..0.012 rows=1 loops=1445)
-> Index Only Scan Backward using file_logs_pk on file_logs fl (cost=0.56..11.72 rows=158 width=852) (actual time=0.011..0.011 rows=1 loops=1445)
Index Cond: ((record_id = f.record_id) AND (filename = (f.filename)::text))
Heap Fetches: 0
Planning Time: 0.117 ms
Execution Time: 18.384 ms

Test Datasets

This dataset simulates the worst-case scenario of a table with 14.6 million rows. Specifically, it contains 14.45 million rows representing a situation in which 1,400 files are changed 10,000 times.

-- cnt: 14605858
select count(0) from file_logs;
-- cnt: 14451538
select count(0) from file_logs where record_id = 'f5c2049f-5a32-44f5-b0cc-b7e0531bf706';

Schema

create table public.file_logs
(
file_key ltree not null,
revision_id integer not null,
record_id uuid not null,
filename varchar(2048) not null,
create_time timestamp,
update_time timestamp,
delete_time timestamp,
blob_sha256 char(64),
constraint file_logs_pk
primary key (record_id, filename, revision_id)
);

alter table public.file_logs
owner to postgres;

create table public.files
(
record_id uuid not null,
filename varchar(2048) not null,
create_at timestamp not null,
primary key (record_id, filename)
);

alter table public.files
owner to postgres;

Further Improvements

We can implement this using an intuitive approach in a graph database.

File tree version in graph database

Background

A file tree is a hierarchical structure used to organize files and directories on a computer. It allows users to easily navigate and access their files and folders, and is commonly used in operating systems and file management software.

But implementing file trees in traditional RDBMS like MySQL can be a challenge due to the lack of support for hierarchical data structures. However, there are workarounds such as using nested sets or materialized path approaches. Alternatively, you could consider using NoSQL databases like MongoDB or document-oriented databases like Couchbase, which have built-in support for hierarchical data structures.

It is possible to implement a file tree in PostgreSQL using the ltree datatype provided by PostgreSQL. This datatype can help us build the hierarchy within the database.

TL;DR

Pros

  • Excellent performance!
  • No migration is needed for this, as no new columns will be added. Only a new expression index needs to be created.

Cons

  • An additional mechanism is needed to create virtual folder entities (only if you need to present the folder level).
  • There are limits on the file/folder name length (especially with non-ASCII characters).

Limitation

The maximum length of a file or directory name is limited. In the worst case, where non-ASCII characters (e.g. Chinese) and ASCII letters are interleaved, the name cannot be longer than 33 characters. Even if every character is Chinese, the name cannot exceed 62 characters.

Based on the PostgreSQL documentation, a label path cannot exceed 65535 labels. In most cases this limit is more than sufficient; it is unlikely that you would ever need to nest directories that deep.

select escape_filename_for_ltree(
'一0二0三0四0五0六0七0八0九0十0' ||
'一0二0三0四0五0六0七0八0九0十0' ||
'一0二0三0四0五0六0七0八0九0十0' ||
'一0二0'
); -- worst case len 34
select escape_filename_for_ltree(
'一二三四五六七八九十' ||
'一二三四五六七八九十' ||
'一二三四五六七八九十' ||
'一二三四五六七八九十' ||
'一二三四五六七八九十' ||
'一二三四五六七八九十' ||
'一二三'
); -- Chinese case len 63
[42622] ERROR: label string is too long Detail: Label length is 259, must be at most 255, at character 260. Where: PL/pgSQL function escape_filename_for_ltree(text) line 5 at SQL statement

How to use

Build expression index

CREATE INDEX idx_file_tree_filename ON files using gist (escape_filename_for_ltree(filename));

Example Query

explain analyse
select filename
from files
where escape_filename_for_ltree(filename) ~ 'ow.*{1}'
and record_id = '1666bad1-202c-496e-bb0e-9664ce3febcb';

Query Result

ow/ros_00000000_2022-03-02-12-55-19_330.bag
ow/ros_00011426_2022-08-15-19-24-11_0.bag
ow/ros_00019378_2022-08-12-18-40-06_0.bag
ow/ros_00011426_2022-08-15-19-24-11_0.bag
ow/ros_00011426_2022-08-15-19-24-11_0.bag.coscene-reserved-index

Query Explain

Bitmap Heap Scan on files  (cost=32.12..36.38 rows=1 width=28) (actual time=0.341..0.355 rows=8 loops=1)
Recheck Cond: ((record_id = '1666bad1-202c-496e-bb0e-9664ce3febcb'::uuid) AND (escape_filename_for_ltree((filename)::text) <@ 'ow'::ltree))
Heap Blocks: exact=3
-> BitmapAnd (cost=32.12..32.12 rows=1 width=0) (actual time=0.323..0.324 rows=0 loops=1)
-> Bitmap Index Scan on idx_file_tree_record_id (cost=0.00..4.99 rows=93 width=0) (actual time=0.051..0.051 rows=100 loops=1)
Index Cond: (record_id = '1666bad1-202c-496e-bb0e-9664ce3febcb'::uuid)
-> Bitmap Index Scan on idx_file_tree_filename (cost=0.00..26.88 rows=347 width=0) (actual time=0.253..0.253 rows=52 loops=1)
Index Cond: (escape_filename_for_ltree((filename)::text) <@ 'ow'::ltree)
Planning Time: 0.910 ms
Execution Time: 0.599 ms

Explanation

PostgreSQL's LTREE data type allows a label to be a sequence of alphanumeric characters and underscores, at most 255 characters long. That leaves us one special character, the underscore, which we can use as the escape notation inside a label.

Slashes (/) are replaced with dots (.); I think that needs no further explanation.

Initially, I attempted to encode all non-alphanumeric characters into their Unicode hex form. After getting advice from others, I realized that base64 encoding is more efficient in terms of information entropy. Ultimately I settled on base62 encoding, which produces no illegal characters while keeping the information entropy as high as possible.

This is the final representation of the physical data that will be stored in the index of PostgreSQL.

select escape_filename_for_ltree('root/folder1/机器人仿真gazebo11-noetic集成ROS1/state.log');
-- result:
-- root.folder1._1hOBTVt5n7EhFWzIbUcjT_gazebo11_j_noetic_1Aw3qhY48_ROS1.state_k_log

Further

If you want to store an isolated file tree in the same table, one thing you need to do is prepend the isolation key as the first label of the ltree. For example:

select escape_filename_for_ltree('<put_user_id_in_there>' || '/' || '<path_to_file>');

By doing this, you will get the best query performance.

Summary

This document explains how to implement a file tree in PostgreSQL using the ltree datatype. The ltree datatype can help build the hierarchy within the database, and an expression index needs to be created. There are some limitations on the file/folder name length, but the performance is excellent. The document also provides PostgreSQL functions for escaping and encoding file/folder names.

Appendix: PostgreSQL Functions

Entry function (immutable is required)

CREATE OR REPLACE FUNCTION escape_filename_for_ltree(filename TEXT)
RETURNS ltree AS
$$
DECLARE
escaped_path ltree;
BEGIN
select string_agg(escape_part(part), '.')
into escaped_path
from (select regexp_split_to_table as part
from regexp_split_to_table(filename, '/')) as parts;

return escaped_path;

END;
$$ LANGUAGE plpgsql IMMUTABLE;

Util: Escape every part (folder or file)

create or replace function escape_part(part text) returns text as
$$
declare
escaped_part text;
begin
select string_agg(escaped, '')
into escaped_part
from (select case substring(sep, 1, 1) ~ '[0-9a-zA-Z]'
when true then sep
else '_' || base62_encode(sep) || '_'
end as escaped
from (select split_string_by_alpha as sep
from split_string_by_alpha(part)) as split) as escape;
RETURN escaped_part;
end;
$$ language plpgsql immutable

Util: Split a string into groups

Each group contains only alphabetic characters or non-alphabetic characters.

CREATE OR REPLACE FUNCTION split_string_by_alpha(input_str TEXT) RETURNS SETOF TEXT AS
$$
DECLARE
split_str TEXT;
BEGIN
IF input_str IS NULL OR input_str = '' THEN
RETURN;
END IF;

WHILE input_str != ''
LOOP
split_str := substring(input_str from '[^0-9a-zA-Z]+|[0-9a-zA-Z]+');
IF split_str != '' THEN
RETURN NEXT split_str;
END IF;
input_str := substring(input_str from length(split_str) + 1);
END LOOP;

RETURN;
END;
$$ LANGUAGE plpgsql

Util: base62 encode function

By using the base62_encode function, we can create a string that meets the requirements of LTREE and achieves maximum information entropy.

CREATE OR REPLACE FUNCTION base62_encode(data TEXT) RETURNS TEXT AS $$
DECLARE
ALPHABET CHAR(62)[] := ARRAY[
'0','1','2','3','4','5','6','7','8','9',
'A','B','C','D','E','F','G','H','I','J',
'K','L','M','N','O','P','Q','R','S','T',
'U','V','W','X','Y','Z','a','b','c','d',
'e','f','g','h','i','j','k','l','m','n',
'o','p','q','r','s','t','u','v','w','x',
'y','z'
];
BASE BIGINT := 62;
result TEXT := '';
val numeric := 0;
bytes bytea := data::bytea;
len INT := length(data::bytea);
BEGIN
FOR i IN 0..(len - 1) LOOP
val := (val * 256) + get_byte(bytes, i);
END LOOP;

WHILE val > 0 LOOP
result := ALPHABET[val % BASE + 1] || result;
val := floor(val / BASE);
END LOOP;

RETURN result;
END;
$$ LANGUAGE plpgsql;

How It Started

This month (August 2022) I got into a debate with someone in the second Rust QQ group, because he insisted that Rust's Ownership mechanism is merely equivalent to Garbage Collection, whereas I hold that Ownership also solves another problem that has plagued countless programmers: resource safety.

Definitions

Common resources fall into a few broad categories:

  • Files
    • Socket
      • TCP
      • HTTP
      • JDBC
    • File system
      • Local
      • Remote (Redis, RDBMS)
  • Logical resources
    • Stream (possibly backed by a file)
    • Log-block or long-connection start/end markers
    • Temporary files that must be deleted

Resource management itself has three steps:

  1. Acquire the resource
  2. Use the resource
  3. Release the resource

These three events must happen strictly in that order.

Resource safety is concerned with:

  • the resource being correctly initialized before use
  • the resource being correctly released after use
  • the resource never being used again after release

What does language-level (syntactic) resource management buy you? It gives the programmer a strong guarantee that, barring extreme situations (power loss and the like), the release logic will definitely be executed.

The History of Resource Management

Prehistory

C

#include <stdio.h>
#include <stdlib.h>

int main()
{
int num;
FILE *fptr;

if ((fptr = fopen("C:\\program.txt","r")) == NULL){
printf("Error! opening file");

// Program exits if the file pointer returns NULL.
exit(1);
}

fscanf(fptr,"%d", &num);

printf("Value of n=%d", num);
fclose(fptr);

return 0;
}

Early programming languages were not aware of resource management as a problem. In the imperative style of the day, programmers had to guarantee acquisition and release themselves. Software systems of that era were also far less complex, and practitioners were on average better trained, so resource management was not as prominent a problem as it is today.

Language constructs

Later, some languages introduced exceptions, which let a program abort and jump out regardless of the control-flow statements, and resource management became more complicated. Languages that can throw exceptions usually scope resources with try {} catch {} finally {}: the try block marks where the resource is used, and no matter how the program leaves it, the block marked finally is guaranteed to run so the resource is released correctly. Representative syntax:

FileReader fr = new FileReader(path);
BufferedReader br = new BufferedReader(fr);
try {
return br.readLine();
} finally {
br.close();
fr.close();
}

Dispose pattern

This pattern is what most garbage-collected languages support today — in Java, for example, via the try keyword.

As far as I know, the only "resource safety" Java offers is try-with-resources, introduced back in Java 7: the resource must implement AutoCloseable and can then be used with confidence inside try(...) { ... code block ... }. Which also means asynchronous code gets no guarantee at all.

static String readFirstLineFromFile(String path) throws IOException {
try (FileReader fr = new FileReader(path);
BufferedReader br = new BufferedReader(fr)) {
return br.readLine();
}
}

The Resource Monad pattern

A relatively safe design pattern that emerged later: wrap the synchronous or asynchronous code that uses the resource in a block, and release the resource when that block finishes. This saves you from having to close the resource manually after every use.

public static <R> CompletionStage<R> use (Function<Resource, CompletionStage<R>> f) {
return Resource.make()
.thenCompose((res) -> f.apply(res)
.handle((r, e) -> {
res.close();
if (e != null) throw new RuntimeException(e);
return r;
})
);
}

use((res) -> {
System.out.println(res);
// write the business logic inside the CompletableFuture
return CompletableFuture.failedStage(new Exception("error"));
}).handle((r, e) -> {
if (e != null) {
System.out.println(e.getMessage());
}
return r;
});

All of the schemes above share one flaw: while using the resource, you can accidentally share the resource variable with other pieces of code (a closure, a callback, an outer variable, or unintentionally hand it off to a queue or an HTTP Response), as in the following Java code.

static HttpEntity fileEntity(String filename) throws IOException {
    try (FileReader fr = new FileReader(filename);
         BufferedReader br = new BufferedReader(fr)) {
        return new HttpEntity(br);
    }
}

If the HttpEntity class does not consume the Reader when it is constructed, but instead starts reading the byte stream only when the HTTP body is transmitted, this will without question access an already-closed resource and may crash the application.

The correct way to write it is as follows:


// Pseudocode
static HttpEntity fileEntity(String filename) throws IOException {
    final FileReader fr = new FileReader(filename);
    final BufferedReader br = new BufferedReader(fr);
    return new HttpEntity(br) {
        public void close() {
            br.close();
            fr.close();
        }
    };
}

RAII

C++

Move semantics make it possible to safely transfer resource ownership between objects, across scopes, and in and out of threads, while maintaining resource safety. — (since C++11)

void f()
{
vector<string> vs(100); // not std::vector: valid() added
if (!vs.valid()) {
// handle error or exit
}

ifstream fs("foo"); // not std::ifstream: valid() added
if (!fs.valid()) {
// handle error or exit
}

// ...
} // destructors clean up as usual

C++ put forward RAII, a groundbreaking concept that nearly solves resource safety. But constrained by the era in which C++ was born, early C++ could only guarantee resource safety through lvalue references plus Clone (deep copy) semantics, so assignments kept deep-copying whole objects and constructing/destroying resources, wasting a great deal of work. C++11 introduced rvalue references, but you still have to implement Move (shallow copy) for them yourself. Moreover, C++ cannot detect a double move, nor the fact that the original variable remains usable after being moved from.

#include <iostream>
using namespace std;

class A{
public:
A(const string& str, int* arr):_str(str),_arr(arr){cout << "parameter ctor" << endl;}
A(A&& obj):_str(std::move(obj._str)),_arr(std::move(obj._arr)){obj._arr = nullptr;cout << "move ctor" << endl;}
A& operator =(A&& rhs){
_str = std::move(rhs._str);
_arr = std::move(rhs._arr);
rhs._arr = nullptr;
cout << "move assignment operation" << endl;
return *this;
}
void print(){
cout << _str << endl;
}
~A(){
delete[] _arr;
cout << "dtor" << endl;
}
private:
string _str;
int* _arr;
};

int main(){
    int* arr = new int[6] {1,1,4,5,1,4};
    A a("Yajuu Senpai", std::move(arr)); // moving a raw pointer achieves nothing --> STUPID MOVE!!
    A b(std::move(a)); // move ctor

    cout << "print a: ";
    a.print(); // a has given up ownership --> CORRECT!!
    cout << "print b: ";
    b.print(); // b now owns the data --> CORRECT!!

    b = std::move(a); // moved a second time

    cout << "print a: ";
    a.print(); // ???
    cout << "print b: ";
    b.print(); // ???
}

Rust

Inheriting RAII from C++, Rust lets you sleep soundly even when a resource is created in one place and used in another, thanks to move / borrow. As a Scala developer, I look at this language-level guarantee with envy.

In Rust, if you move a value to someone else, they become responsible for dropping it; if you merely borrow it to them, you remain responsible for the drop. The compiler checks lifetimes to guarantee that nothing is moved twice and that no borrow (&) outlives its owner. The division of responsibility is crystal clear; as long as you keep a clear head, there is no need to worry about leaks in asynchronous code.

Programmers cannot call delete explicitly; they can only follow Rust's rules, and the compiler inserts each variable's drop at the right place according to its lifetime — usually the point where it leaves its block scope.

Rust Drop

https://doc.rust-lang.org/rust-by-example/scope/raii.html

A synchronous example:

struct Entity<'a> {
    connection: &'a Connection,
    file: File,
}

impl<'a> Drop for Entity<'a> {
    fn drop(&mut self) {
        self.file.write(EOF); // <- write the terminator before the file is closed
    }
}

let conn = make_connection();
let file = open_file();
let entity = Entity {
    connection: &conn, // <- conn is lent (borrowed) to entity
    file: file,        // <- ownership of file is moved into entity
};                     // <- from this point on, file can no longer be accessed

fn send(entity: Entity) {
    // logic
    return;
    // <- the compiler inserts the release of entity here
}

send(entity); // <- ownership of entity is handed to the send function
// <- the compiler inserts the release of conn here
// <- entity and file have already been moved, so they are not released again here

An asynchronous example:

fn move_block() -> impl Future<Output = ()> {
    let my_string = "foo".to_string();
    async move { // <- my_string is moved into this block scope
        // ...
        println!("{}", my_string); // <- my_string is visible here
        // <- the compiler inserts my_string's drop code here
    }
    // <- my_string is no longer visible; this function has given up ownership of it, so no drop code is inserted here
}

Resource escape through closure capture and resource escape through storing a reference in a class/struct are, at the language level, the same problem, so Rust handles both with the same machinery.

Summary

In theory, a garbage collector cannot solve the resource-safety problem. Some people might think:

"Just add destructors to Java, let developers put the release logic in the destructor, and have the GC call destroy()/dispose() automatically when it reclaims the memory — problem solved, right?"

In reality that road is a dead end. Depending on the algorithm, a GC follows different strategies; its collect/free actions are guaranteed to happen, but there is no guarantee of when, or in what order. When we evaluate a GC we care about throughput rather than how promptly memory is reclaimed, and if there is no memory pressure, the GC prefers not to collect at all.

This behavior can lead to surprising consequences. For example, when business logic A finishes, the reference count of the file-resource object is already zero, so A notifies logic B that the file is ready; but when B tries to open the file it finds it incomplete, because the system call that closes the file has not yet been executed by the GC 😵.

Rust offers a complete solution: it not only fixes a whole class of memory-safety problems, it solves resource safety along the way. Ownership plus strict compiler checks force programmers to write resource-safe code; all the programmer needs to do is implement impl Drop for [...] correctly.

I consider the ownership and RAII that Rust implements to be the most complete resource-management mechanism available today.

  • Compared with try-catch-finally, you no longer need to be extra careful about double release every time you use a resource (double delete — a very common and headache-inducing problem in Java).
  • Compared with the ResourceMonad pattern, resources cannot escape.
  • It is a brand-new language, without the heavy historical baggage of C++.

Even if you do not write Rust, you can still borrow its ideas, because its syntax is itself a best practice of resource management; learning it helps you avoid the wrong patterns in other languages.
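One way to borrow the idea in Scala, as a minimal sketch (assuming Scala 2.13+'s scala.util.Using; the file access is just an example): the resource's lifetime is bound to a block, and it is closed even if the body throws, so it cannot escape half-open.

```scala
import scala.util.{Try, Using}
import java.io.{BufferedReader, FileReader}

// Loan pattern via scala.util.Using: the BufferedReader is acquired, lent to
// the block, and closed when the block exits, whether normally or by exception.
def firstLine(path: String): Try[String] =
  Using(new BufferedReader(new FileReader(path))) { br =>
    br.readLine()
  }
```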

Welcome to Hexo! This is your very first post. Check documentation for more info. If you get any problems when using Hexo, you can find the answer in troubleshooting or you can ask me on GitHub.

Quick Start

Create a new post

$ hexo new "My New Post"

More info: Writing

Run server

$ hexo server

More info: Server

Generate static files

$ hexo generate

More info: Generating

Deploy to remote sites

$ hexo deploy

More info: Deployment

Goals

  • Send event messages by HTTP POST
  • Record the success/error responses returned by the third party.
  • Re-send the message when the third party returns an error or is down.

Features

  • Delivery (HTTP POST)
  • Success / Error Record
  • Re-send in the case of third-party failure.
  • Run in multi-process environment (e.g. Kubernetes)
  • Metrics of backlog message
  • Policy for automatically archiving records
  • Prevent sending too frequently after a burst of generated events (throttle)

Results

Foreword

On April 17, 2020, I started a huge project to migrate a legacy Play Framework codebase from Future[T] to ZIO. Three months and tens of thousands of changed lines later, the migration was a complete success. In September I used this experience as the backdrop for a talk in the Chinese Scala Meetup community titled "Introduction to ZIO". In that talk I laid out an ambitious vision: use the abstraction that ZIO's type parameter R provides to make code portable and to improve testability. Realizing that vision in a legacy project is not easy; the main challenges are the coupling in legacy code and developers' mental inertia. Today that vision has been achieved, and in this post I will share the evolutionary path that got me there.

What is R

A ZIO[R, E, A] value is an immutable value that lazily describes a workflow or job. The workflow requires some environment R, and may fail with an error of type E, or succeed with a value of type A.

The passage above comes from a comment in the ZIO source code, and it points out that R is the type of the environment the workflow requires. We can use this environment parameter to abstract away a procedure's dependency on its environment. That sounds a lot like dependency injection, and indeed it is. The difference is that the usual inversion-of-control frameworks inject through the members and constructors of business-logic classes (Spring @Autowired, Guice, MacWire, and so on), whereas ZIO provides the instance of the environment object at runtime.

Example:

Spring @Autowired: Inject the dependent instance into the object as a member of the class

public class MovieRecommender {

private final CustomerPreferenceDao customerPreferenceDao;

@Autowired
public MovieRecommender(CustomerPreferenceDao customerPreferenceDao) {
this.customerPreferenceDao = customerPreferenceDao;
}

// ...
}

ZIO R: Treat the instance as part of the environment, and provide it on demand at runtime

object MovieRecommender {
  def recommend(): ZIO[CustomerPreferenceDao, Throwable, RecommendResult] = {
    ???
  }

  // ...
}

What are the problems caused by DI tools?

I want to ask the reader a question: How much does it cost to test a small feature in your software system?

See: Negative example

package meetup.di.legacy

import meetup.di.legacy.member.{Accountant, Baker, CandyMaker, Cashier, Decorator, HunanChef, Logistic, Manager, Security, SichuanChef, Waiter, Washer}
import meetup.di.legacy.tools.{Mixer, Oven, PipingTip, Turntable}
import meetup.di.legacy.utils.Demo

object djx314 extends App with Demo {
val mixer = new Mixer()
val oven = new Oven()
val pipingTip = new PipingTip()
val turntable = new Turntable()
val baker = new Baker(oven, mixer)
val decorator = new Decorator(mixer, pipingTip, turntable)
val candyMaker = new CandyMaker
val scChef = new SichuanChef
val hnChef = new HunanChef
val cashier = new Cashier
val waiter = new Waiter
val washer = new Washer
val logistic = new Logistic
val security = new Security
val accountant = new Accountant
val manager = new Manager
val cs = new CakeShop(
baker, decorator, candyMaker,
scChef, hnChef, cashier,
waiter, washer, logistic,
security, accountant, manager
)

cs.simpleCake()
.onComplete(println)

}

I just want to test one small piece of functionality — why do I need to construct every dependent instance? Why does testing such a simple function take so much preparation? Because the function is a method of a class, and the class has far too many constructor parameters, most of which are irrelevant to the function we want to test.

If you want project code to be as portable and testable as possible, one simple and practical rule is: do not write functions that have nothing to do with the object (no use of this) inside the class; move them into an object (in Java: mark the method static). Writing referentially transparent code also helps toward the same goal, as the sketch below shows.
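A tiny sketch of that rule (all names here are invented for illustration): the pure logic lives in the companion object, so it can be tested with plain values, without constructing the class or any of its dependencies.

```scala
trait ReportDao { def load(): List[String] }
trait Mailer    { def send(body: String): Unit }

// The class keeps only the glue that genuinely needs its dependencies.
class ReportService(dao: ReportDao, mailer: Mailer) {
  def sendDaily(): Unit = mailer.send(ReportService.render(dao.load()))
}

object ReportService {
  // No `this` involved: portable, and trivially testable without a DI container.
  def render(rows: List[String]): String = rows.mkString("\n")
}
```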

However, achieving this in a legacy project is somewhat difficult, because most developers have been misusing their dependency-injection framework, just like the negative example above. Even Spring contributors made the same mistake. See: Spring's sample project

Too many irrelevant dependencies put huge obstacles in the way of code portability, turning the code into ad hoc code that is difficult to test.

The whole system ends up like a ball made of tangled strings and knots.

Current Situation

The solutions in the community... a ZIO layer is rebuilt on every unsafeRun. That is very purely functional, but it does not fit a web service — think of a connection pool.

Problems I ran into: connection pools, semaphores.

So I had to experiment on my own:

Evolution Stage 1

ZioController + PlayRunner

Evolution Stage 2

ZController + ZRunner


Evolution Stage 3

ZController + Runtime.Managed

trait ZController[Z, R[_], B] {
def runtime: Runtime.Managed[Z]

implicit class ActionBuilderOps(actionBuilder: ActionBuilder[R, B]) {
def zio[E](zioActionBody: => ZIO[Z, Throwable, Result]): Action[AnyContent] = actionBuilder.async {
runtime.unsafeRunToFuture(zioActionBody.resurrect)
}

def zio[E](zioActionBody: R[B] => ZIO[Z, Throwable, Result]): Action[B] = actionBuilder.async { req =>
runtime.unsafeRunToFuture(zioActionBody(req).resurrect)
}
}
}

/**
* A runtime that can be shutdown to release resources allocated to it.
*/
abstract class Managed[+R] extends Runtime[R] { /* ... */ }

Putting it to use:

object ExampleController {
case class Config(endpoint: String)
def flow(endpoint: String, url: String, body: RawBuffer): ZIO[Has[WSClient], Throwable, String] = ???
}

class ExampleController(cc: ControllerComponents,
config: ExampleController.Config,
MAction: MemberAction)
(implicit val runtime: Runtime.Managed[Has[WSClient]],
val ec: ExecutionContext)
extends AbstractController(cc) with ZController[Has[WSClient], MemberRequest, RawBuffer] {

def handle(url: String): Action[RawBuffer] = MAction(parse.raw) zio { request: MemberRequest[RawBuffer] =>
flow(config.endpoint, url, request.body)
.map(name => Ok(name))
.mapError(e => InternalServerError("Oh no!\n" + e.getMessage))
.merge
}
}

Conclusion

References

Dependency Injection Trade-offs
