Browser Fingerprinting Using WebAssembly
Web client fingerprinting has become a widely used technique for uniquely identifying users, browsers, operating systems, and devices with high accuracy. While it is beneficial for applications such as fraud detection and personalized experiences, it also raises privacy concerns by enabling persistent tracking and detailed user profiling. This paper introduces an advanced fingerprinting method using WebAssembly (Wasm) - a low-level programming language that offers near-native execution speed in modern web browsers. With broad support across major browsers and growing adoption, WebAssembly provides a strong foundation for developing more effective fingerprinting methods. In this work, we present a new approach that leverages WebAssembly's computational capabilities to identify returning devices-such as smartphones, tablets, laptops, and desktops across different browsing sessions. Our method uses subtle differences in the WebAssembly JavaScript API implementation to distinguish between Chromium-based browsers like Google Chrome and Microsoft Edge, even when identifiers such as the User-Agent are completely spoofed, achieving a false-positive rate of less than 1%. The fingerprint is generated using a combination of CPU-bound operations, memory tasks, and I/O activities to capture unique browser behaviors. We validate this approach on a variety of platforms, including Intel, AMD, and ARM CPUs, operating systems such as Windows, macOS, Android, and iOS, and in environments like VMWare, KVM, and VirtualBox. Extensive evaluation shows that WebAssembly-based fingerprinting significantly improves identification accuracy. We also propose mitigation strategies to reduce the privacy risks associated with this method, which could be integrated into future browser designs to better protect user privacy.
Updated: 2025-05-31 21:39:17
标题: 使用WebAssembly进行浏览器指纹识别
摘要: Web客户端指纹识别已成为一种广泛使用的技术,可以准确识别用户、浏览器、操作系统和设备。虽然对于欺诈检测和个性化体验等应用非常有益,但这也引发了隐私问题,因为它能够实现持续跟踪和详细用户画像。本文介绍了一种使用WebAssembly(Wasm)的高级指纹识别方法,这是一种低级编程语言,在现代网络浏览器中具有接近本机执行速度。WebAssembly在主要浏览器上得到广泛支持,并且受到越来越多的采用,为开发更有效的指纹识别方法提供了坚实基础。 在这项工作中,我们提出了一种新方法,利用WebAssembly的计算能力来识别返回设备,例如智能手机、平板电脑、笔记本电脑和台式电脑在不同的浏览会话中。我们的方法利用WebAssembly JavaScript API实现中的微妙差异来区分基于Chromium的浏览器,如谷歌Chrome和微软Edge,甚至在完全伪装了用户代理等标识符的情况下,实现了低于1%的误报率。指纹是通过组合CPU绑定操作、内存任务和I/O活动生成的,以捕获独特的浏览器行为。我们在各种平台上验证了这种方法,包括Intel、AMD和ARM CPU,操作系统如Windows、macOS、Android和iOS,以及VMWare、KVM和VirtualBox等环境。广泛的评估显示,基于WebAssembly的指纹识别显著提高了识别准确性。我们还提出了减少与该方法相关的隐私风险的缓解策略,这些策略可以整合到未来浏览器设计中,以更好地保护用户隐私。
更新时间: 2025-05-31 21:39:17
领域: cs.CR
Security Concerns for Large Language Models: A Survey
Large Language Models (LLMs) such as GPT-4 and its recent iterations, Google's Gemini, Anthropic's Claude 3 models, and xAI's Grok have caused a revolution in natural language processing, but their capabilities also introduce new security vulnerabilities. In this survey, we provide a comprehensive overview of the emerging security concerns around LLMs, categorizing threats into prompt injection and jailbreaking, adversarial attacks such as input perturbations and data poisoning, misuse by malicious actors for purposes such as generating disinformation, phishing emails, and malware, and worrisome risks inherent in autonomous LLM agents. A significant focus has been recently placed on the latter, exploring goal misalignment, emergent deception, self-preservation instincts, and the potential for LLMs to develop and pursue covert, misaligned objectives, a behavior known as scheming, which may even persist through safety training. We summarize recent academic and industrial studies from 2022 to 2025 that exemplify each threat, analyze proposed defenses and their limitations, and identify open challenges in securing LLM-based applications. We conclude by emphasizing the importance of advancing robust, multi-layered security strategies to ensure LLMs are safe and beneficial.
Updated: 2025-05-31 21:25:09
标题: 大型语言模型的安全问题:一项调查
摘要: 大型语言模型(LLMs)如GPT-4及其最新版本、谷歌的Gemini、Anthropic的Claude 3模型和xAI的Grok已经在自然语言处理领域引起了革命,但它们的能力也引入了新的安全漏洞。在这份调查中,我们提供了关于LLMs周围新兴安全问题的全面概述,将威胁分类为提示注入和越狱、对抗性攻击(如输入扰动和数据毒害)、恶意行为者滥用LLMs来生成虚假信息、钓鱼邮件和恶意软件,以及自主LLM代理固有的令人担忧的风险。最近,重点放在后者上,探讨目标错位、出现的欺骗、自我保护本能以及LLMs发展和追求秘密、目标错位的潜力,这种行为被称为策划,甚至可能在安全培训中持续存在。我们总结了2022年至2025年的最新学术和工业研究,例举了每种威胁,分析了提出的防御措施及其局限性,并确定了保护基于LLM的应用程序面临的开放性挑战。最后,我们强调了推进强大、多层次安全策略的重要性,以确保LLMs是安全且有益的。
更新时间: 2025-05-31 21:25:09
领域: cs.CR,cs.AI
Federated Learning for Smart Grid: A Survey on Applications and Potential Vulnerabilities
The Smart Grid (SG) is a critical energy infrastructure that collects real-time electricity usage data to forecast future energy demands using information and communication technologies (ICT). Due to growing concerns about data security and privacy in SGs, federated learning (FL) has emerged as a promising training framework. FL offers a balance between privacy, efficiency, and accuracy in SGs by enabling collaborative model training without sharing private data from IoT devices. In this survey, we thoroughly review recent advancements in designing FL-based SG systems across three stages: generation, transmission and distribution, and consumption. Additionally, we explore potential vulnerabilities that may arise when implementing FL in these stages. Furthermore, we discuss the gap between state-of-the-art (SOTA) FL research and its practical applications in SGs, and we propose future research directions. Unlike traditional surveys addressing security issues in centralized machine learning methods for SG systems, this survey is the first to specifically examine the applications and security concerns unique to FL-based SG systems. We also introduce FedGridShield, an open-source framework featuring implementations of SOTA attack and defense methods. Our aim is to inspire further research into applications and improvements in the robustness of FL-based SG systems.
Updated: 2025-05-31 21:24:57
标题: 标题翻译为:智能电网的联邦学习:应用和潜在漏洞调查
摘要: 智能电网(SG)是一个关键的能源基础设施,利用信息和通信技术(ICT)收集实时电力使用数据,以预测未来能源需求。由于对SG中数据安全和隐私的忧虑日益增加,联邦学习(FL)已经成为一种有前途的训练框架。FL通过实现协作模型训练,而无需共享来自物联网设备的私人数据,在SG中实现隐私、效率和准确性的平衡。在这项调查中,我们全面审查了最近设计的基于FL的SG系统在三个阶段(发电、输电和配电以及消费)的最新进展。此外,我们探讨了在这些阶段实施FL时可能出现的潜在漏洞。此外,我们讨论了SOTA FL研究与其在SG中实际应用之间的差距,并提出未来的研究方向。与传统调查不同,这项调查着重于专门审查基于FL的SG系统的应用和安全问题。我们还介绍了FedGridShield,一个开源框架,具有SOTA攻击和防御方法的实现。我们的目标是激发对基于FL的SG系统的应用和改进鲁棒性的进一步研究。
更新时间: 2025-05-31 21:24:57
领域: cs.LG,cs.CR,C.2.4
Improving the Context Length and Efficiency of Code Retrieval for Tracing Security Vulnerability Fixes
An upstream task for software bill-of-materials (SBOMs) is the accurate localization of the patch that fixes a vulnerability. Nevertheless, existing work reveals a significant gap in the CVEs whose patches exist but are not traceable. Existing works have proposed several approaches to trace/retrieve the patching commit for fixing a CVE. However, they suffer from two major challenges: (1) They cannot effectively handle long diff code of a commit; (2) We are not aware of existing work that scales to the full repository with satisfactory accuracy. Upon identifying this gap, we propose SITPatchTracer, a scalable and effective retrieval system for tracing known vulnerability patches. To handle the context length challenge, SITPatchTracer proposes a novel hierarchical embedding technique which efficiently extends the context coverage to 6x that of existing work while covering all files in the commit. To handle the scalability challenge, SITPatchTracer utilizes a three-phase framework, balancing the effectiveness/efficiency in each phase. The evaluation of SITPatchTracer demonstrates it outperforms existing patch tracing methods (PatchFinder, PatchScout, VFCFinder) by a large margin. Furthermore, SITPatchTracer outperforms VoyageAI, the SOTA commercial code embedding LLM (\$1.8 per 10K commits) on the MRR and Recall@10 by 18\% and 28\% on our two datasets. Using SITPatchTracer, we have successfully traced and merged the patch links for 35 new CVEs in the GitHub Advisory database Our ablation study reveals that hierarchical embedding is a practically effective way of handling long context for patch retrieval.
Updated: 2025-05-31 19:45:52
标题: 改进代码检索的上下文长度和效率以追踪安全漏洞修复
摘要: 软件物料清单(SBOMs)的一个上游任务是准确定位修复漏洞的补丁。然而,现有工作揭示了存在补丁存在但无法追踪的CVE的重大差距。现有研究提出了几种方法来追踪/检索修复CVE的补丁提交。然而,它们面临两个主要挑战:(1)无法有效处理提交的长差异代码;(2)我们不知道任何现有工作可以满足对完整仓库的可扩展性和令人满意的准确性。在确定这一差距后,我们提出了SITPatchTracer,一个可扩展且有效的检索系统,用于追踪已知漏洞补丁。为了解决上下文长度挑战,SITPatchTracer提出了一种新颖的分层嵌入技术,有效地将上下文覆盖范围扩展到现有工作的6倍,同时涵盖提交中的所有文件。为了解决扩展性挑战,SITPatchTracer利用了一个三阶段框架,在每个阶段平衡了效果/效率。 SITPatchTracer的评估表明,它在追踪补丁方面的性能远远优于现有的方法(PatchFinder,PatchScout,VFCFinder)。此外,SITPatchTracer在我们的两个数据集上比SOTA商业代码嵌入LLM(每10K提交1.8美元)的MRR和Recall@10性能提高了18%和28%。使用SITPatchTracer,我们成功追踪并合并了GitHub咨询数据库中35个新CVE的补丁链接。我们的消融研究表明,分层嵌入是处理长上下文进行补丁检索的一种实际有效的方法。
更新时间: 2025-05-31 19:45:52
领域: cs.CR,cs.SE
An Adversarial Perspective on Machine Unlearning for AI Safety
Large language models are finetuned to refuse questions about hazardous knowledge, but these protections can often be bypassed. Unlearning methods aim at completely removing hazardous capabilities from models and make them inaccessible to adversaries. This work challenges the fundamental differences between unlearning and traditional safety post-training from an adversarial perspective. We demonstrate that existing jailbreak methods, previously reported as ineffective against unlearning, can be successful when applied carefully. Furthermore, we develop a variety of adaptive methods that recover most supposedly unlearned capabilities. For instance, we show that finetuning on 10 unrelated examples or removing specific directions in the activation space can recover most hazardous capabilities for models edited with RMU, a state-of-the-art unlearning method. Our findings challenge the robustness of current unlearning approaches and question their advantages over safety training.
Updated: 2025-05-31 19:22:41
标题: 一种关于AI安全的机器遗忘的对抗性观点
摘要: 大型语言模型经过微调以拒绝有关危险知识的问题,但这些保护措施通常可以被绕过。取消学习方法旨在彻底从模型中删除危险功能,并使其对攻击者不可访问。这项工作从对抗角度挑战了取消学习和传统安全训练之间的根本差异。我们证明了现有的越狱方法,先前被认为对取消学习无效,可以在谨慎应用时取得成功。此外,我们开发了一系列适应性方法,可以恢复大部分据称已取消学习的功能。例如,我们证明了在10个不相关的例子上进行微调或在激活空间中删除特定方向可以恢复使用RMU,一种最先进的取消学习方法编辑的模型的大部分危险功能。我们的发现挑战了当前取消学习方法的稳健性,并质疑它们相对于安全训练的优势。
更新时间: 2025-05-31 19:22:41
领域: cs.LG,cs.AI,cs.CL,cs.CR
Review of Blockchain-Based Approaches to Spent Fuel Management in Nuclear Power Plants
This study addresses critical challenges in managing the transportation of spent nuclear fuel, including inadequate data transparency, stringent confidentiality requirements, and a lack of trust among collaborating parties, issues prevalent in traditional centralized management systems. Given the high risks involved, balancing data confidentiality with regulatory transparency is imperative. To overcome these limitations, a prototype system integrating blockchain technology and the Internet of Things (IoT) is proposed, featuring a multi-tiered consortium chain architecture. This system utilizes IoT sensors for real-time data collection, which is immutably recorded on the blockchain, while a hierarchical data structure (operational, supervisory, and public layers) manages access for diverse stakeholders. The results demonstrate that this approach significantly enhances data immutability, enables real-time multi-sensor data integration, improves decentralized transparency, and increases resilience compared to traditional systems. Ultimately, this blockchain-IoT framework improves the safety, transparency, and efficiency of spent fuel transportation, effectively resolving the conflict between confidentiality and transparency in nuclear data management and offering significant practical implications.
Updated: 2025-05-31 19:09:15
标题: 区块链技术在核电厂废物管理中的应用方法综述
摘要: 这项研究解决了管理核废料运输中的关键挑战,包括数据透明度不足、严格的保密要求以及合作方之间缺乏信任等问题,这些问题在传统的集中管理系统中普遍存在。鉴于涉及的风险较高,平衡数据保密性和监管透明度至关重要。为了克服这些限制,提出了一个整合区块链技术和物联网(IoT)的原型系统,采用多层次的财团链架构。该系统利用物联网传感器进行实时数据收集,这些数据被不可篡改地记录在区块链上,同时采用分层数据结构(操作、监督和公共层)管理各种利益相关者的访问。结果表明,这种方法显著增强了数据的不可变性,实现了实时多传感器数据集成,改善了去中心化透明度,并相对于传统系统提高了韧性。最终,这种区块链-IoT框架提高了核废料运输的安全性、透明度和效率,有效解决了核数据管理中保密性和透明度之间的冲突,并提供了重要的实际意义。
更新时间: 2025-05-31 19:09:15
领域: cs.CR,cs.ET,physics.app-ph
AdvAgent: Controllable Blackbox Red-teaming on Web Agents
Foundation model-based agents are increasingly used to automate complex tasks, enhancing efficiency and productivity. However, their access to sensitive resources and autonomous decision-making also introduce significant security risks, where successful attacks could lead to severe consequences. To systematically uncover these vulnerabilities, we propose AdvAgent, a black-box red-teaming framework for attacking web agents. Unlike existing approaches, AdvAgent employs a reinforcement learning-based pipeline to train an adversarial prompter model that optimizes adversarial prompts using feedback from the black-box agent. With careful attack design, these prompts effectively exploit agent weaknesses while maintaining stealthiness and controllability. Extensive evaluations demonstrate that AdvAgent achieves high success rates against state-of-the-art GPT-4-based web agents across diverse web tasks. Furthermore, we find that existing prompt-based defenses provide only limited protection, leaving agents vulnerable to our framework. These findings highlight critical vulnerabilities in current web agents and emphasize the urgent need for stronger defense mechanisms. We release code at https://ai-secure.github.io/AdvAgent/.
Updated: 2025-05-31 18:34:01
标题: AdvAgent: 可控黑盒红队测试Web代理
摘要: 基于基础模型的代理越来越被用于自动化复杂任务,提高效率和生产力。然而,他们对敏感资源的访问和自主决策也引入了重大安全风险,成功的攻击可能导致严重后果。为了系统地揭示这些漏洞,我们提出了AdvAgent,一个针对网络代理进行黑盒红队攻击的框架。与现有方法不同,AdvAgent采用基于强化学习的流水线来训练一个对抗提示模型,利用来自黑盒代理的反馈来优化对抗提示。通过精心设计攻击,这些提示有效地利用代理的弱点,同时保持隐蔽性和可控性。广泛的评估表明,AdvAgent在各种网络任务中对基于最新技术的GPT-4网络代理取得高成功率。此外,我们发现现有基于提示的防御措施只提供有限的保护,使代理容易受到我们框架的攻击。这些发现突显了当前网络代理的关键漏洞,并强调了对更强大防御机制的迫切需求。我们在https://ai-secure.github.io/AdvAgent/发布了代码。
更新时间: 2025-05-31 18:34:01
领域: cs.CR,cs.CL
Organizational Adaptation to Generative AI in Cybersecurity: A Systematic Review
Cybersecurity organizations are adapting to GenAI integration through modified frameworks and hybrid operational processes, with success influenced by existing security maturity, regulatory requirements, and investments in human capital and infrastructure. This qualitative research employs systematic document analysis and comparative case study methodology to examine how cybersecurity organizations adapt their threat modeling frameworks and operational processes to address generative artificial intelligence integration. Through examination of 25 studies from 2022 to 2025, the research documents substantial transformation in organizational approaches to threat modeling, moving from traditional signature-based systems toward frameworks incorporating artificial intelligence capabilities. The research identifies three primary adaptation patterns: Large Language Model integration for security applications, GenAI frameworks for risk detection and response automation, and AI/ML integration for threat hunting. Organizations with mature security infrastructures, particularly in finance and critical infrastructure sectors, demonstrate higher readiness through structured governance approaches, dedicated AI teams, and robust incident response processes. Organizations achieve successful GenAI integration when they maintain appropriate human oversight of automated systems, address data quality concerns and explainability requirements, and establish governance frameworks tailored to their specific sectors. Organizations encounter ongoing difficulties with privacy protection, bias reduction, personnel training, and defending against adversarial attacks. This work advances understanding of how organizations adopt innovative technologies in high-stakes environments and offers actionable insights for cybersecurity professionals implementing GenAI systems.
Updated: 2025-05-31 18:16:11
标题: 组织对生成式人工智能在网络安全领域的适应性:一项系统性回顾
摘要: 网络安全组织正在通过修改框架和混合运营流程来适应GenAI集成,成功与现有安全成熟度、监管要求以及对人力资本和基础设施的投资有关。这项定性研究采用系统文件分析和比较案例研究方法,研究网络安全组织如何调整其威胁建模框架和运营流程以应对生成式人工智能集成。通过对2022年至2025年的25项研究进行研究,研究记录了组织在威胁建模方面的重大转变,从传统的基于签名的系统向融合人工智能功能的框架发展。研究确定了三种主要的适应模式:用于安全应用的大型语言模型集成,用于风险检测和响应自动化的GenAI框架,以及用于威胁猎杀的AI/ML集成。金融和关键基础设施部门拥有成熟安全基础设施的组织,通过结构化治理方法、专门的人工智能团队和强大的事件响应流程展示出更高的准备性。组织在维持自动化系统的适当人工监督、解决数据质量问题和可解释性要求,并建立适合其特定领域的治理框架时实现了成功的GenAI集成。组织在隐私保护、减少偏见、人员培训以及抵御对抗性攻击方面遇到持续困难。这项工作推动了对组织如何在高风险环境中采用创新技术的理解,并为实施GenAI系统的网络安全专业人员提供可操作的见解。
更新时间: 2025-05-31 18:16:11
领域: cs.CR,cs.AI,cs.CY,K.6.5; I.2.0; K.4.1
PackHero: A Scalable Graph-based Approach for Efficient Packer Identification
Anti-analysis techniques, particularly packing, challenge malware analysts, making packer identification fundamental. Existing packer identifiers have significant limitations: signature-based methods lack flexibility and struggle against dynamic evasion, while Machine Learning approaches require extensive training data, limiting scalability and adaptability. Consequently, achieving accurate and adaptable packer identification remains an open problem. This paper presents PackHero, a scalable and efficient methodology for identifying packers using a novel static approach. PackHero employs a Graph Matching Network and clustering to match and group Call Graphs from programs packed with known packers. We evaluate our approach on a public dataset of malware and benign samples packed with various packers, demonstrating its effectiveness and scalability across varying sample sizes. PackHero achieves a macro-average F1-score of 93.7% with just 10 samples per packer, improving to 98.3% with 100 samples. Notably, PackHero requires fewer samples to achieve stable performance compared to other Machine Learning-based tools. Overall, PackHero matches the performance of State-of-the-art signature-based tools, outperforming them in handling Virtualization-based packers such as Themida/Winlicense, with a recall of 100%.
Updated: 2025-05-31 18:01:50
标题: PackHero:一种用于高效包装器识别的可扩展基于图的方法
摘要: Anti-analysis技术,尤其是打包技术,挑战了恶意软件分析人员,使得打包器的识别成为基本问题。现有的打包器识别方法存在显著的局限性:基于签名的方法缺乏灵活性,并且在动态逃避方面存在困难,而机器学习方法需要大量的训练数据,限制了可扩展性和适应性。因此,实现准确和适应性强的打包器识别仍然是一个未解决的问题。本文介绍了PackHero,一种用于识别打包器的可扩展和高效的方法。PackHero采用了一种新颖的静态方法,利用图匹配网络和聚类来匹配和分组使用已知打包器打包的程序的调用图。我们在一个包含各种打包器的恶意软件和良性样本的公共数据集上评估了我们的方法,展示了其在不同样本规模下的有效性和可扩展性。PackHero在每个打包器仅有10个样本的情况下实现了93.7%的宏平均F1分数,在100个样本时提高到98.3%。值得注意的是,与其他基于机器学习的工具相比,PackHero需要更少的样本来实现稳定的性能。总的来说,PackHero与最先进的基于签名的工具性能相匹敵,在处理基于虚拟化的打包器(如Themida/Winlicense)方面表现更优,召回率达到100%。
更新时间: 2025-05-31 18:01:50
领域: cs.CR,cs.LG
Amatriciana: Exploiting Temporal GNNs for Robust and Efficient Money Laundering Detection
Money laundering is a financial crime that poses a serious threat to financial integrity and social security. The growing number of transactions makes it necessary to use automatic tools that help law enforcement agencies detect such criminal activity. In this work, we present Amatriciana, a novel approach based on Graph Neural Networks to detect money launderers inside a graph of transactions by considering temporal information. Amatriciana uses the whole graph of transactions without splitting it into several time-based subgraphs, exploiting all relational information in the dataset. Our experiments on a public dataset reveal that the model can learn from a limited amount of data. Furthermore, when more data is available, the model outperforms other State-of-the-art approaches; in particular, Amatriciana decreases the number of False Positives (FPs) while detecting many launderers. In summary, Amatriciana achieves an F1 score of 0.76. In addition, it lowers the FPs by 55% with respect to other State-of-the-art models.
Updated: 2025-05-31 17:47:29
标题: Amatriciana: 利用时间性GNNs进行稳健高效的洗钱检测
摘要: 洗钱是一种严重威胁金融诚信和社会安全的金融犯罪。交易数量不断增加,需要使用自动工具帮助执法机构检测此类犯罪活动。在这项工作中,我们提出了一种名为Amatriciana的新方法,基于图神经网络,通过考虑时间信息来检测交易图中的洗钱者。Amatriciana使用整个交易图,而不将其分成几个基于时间的子图,充分利用数据集中的所有关系信息。我们在一个公共数据集上的实验显示,该模型可以从有限数量的数据中学习。此外,当有更多数据可用时,该模型胜过其他最先进的方法;特别是,Amatriciana减少了假阳性(FPs)的数量,同时检测到了许多洗钱者。总之,Amatriciana实现了0.76的F1得分。此外,与其他最先进模型相比,它将FPs降低了55%。
更新时间: 2025-05-31 17:47:29
领域: cs.CR,cs.LG
Video Signature: In-generation Watermarking for Latent Video Diffusion Models
The rapid development of Artificial Intelligence Generated Content (AIGC) has led to significant progress in video generation but also raises serious concerns about intellectual property protection and reliable content tracing. Watermarking is a widely adopted solution to this issue, but existing methods for video generation mainly follow a post-generation paradigm, which introduces additional computational overhead and often fails to effectively balance the trade-off between video quality and watermark extraction. To address these issues, we propose Video Signature (VIDSIG), an in-generation watermarking method for latent video diffusion models, which enables implicit and adaptive watermark integration during generation. Specifically, we achieve this by partially fine-tuning the latent decoder, where Perturbation-Aware Suppression (PAS) pre-identifies and freezes perceptually sensitive layers to preserve visual quality. Beyond spatial fidelity, we further enhance temporal consistency by introducing a lightweight Temporal Alignment module that guides the decoder to generate coherent frame sequences during fine-tuning. Experimental results show that VIDSIG achieves the best overall performance in watermark extraction, visual quality, and generation efficiency. It also demonstrates strong robustness against both spatial and temporal tampering, highlighting its practicality in real-world scenarios.
Updated: 2025-05-31 17:43:54
标题: 视频签名:潜在视频扩散模型的同代水印技术
摘要: 人工智能生成内容(AIGC)的快速发展推动了视频生成方面取得了显著进展,但也引发了对知识产权保护和可靠内容追踪的严重关注。数字水印是解决这一问题的广泛采用方法,但现有的视频生成方法主要遵循后生成范式,这引入了额外的计算开销,并经常无法有效平衡视频质量和水印提取之间的权衡。为了解决这些问题,我们提出了视频签名(VIDSIG),这是一种用于潜在视频扩散模型的生成中水印方法,它在生成过程中实现了隐式和自适应的水印集成。具体地,我们通过部分微调潜在解码器来实现这一点,其中感知敏感层的扰动感知抑制(PAS)预先识别并冻结,以保留视觉质量。除了空间保真度,我们还通过引入轻量级的时间对齐模块进一步增强了时间一致性,该模块指导解码器在微调过程中生成连贯的帧序列。实验结果表明,VIDSIG在水印提取、视觉质量和生成效率方面取得了最佳综合性能。它还展示了对空间和时间篡改的强大鲁棒性,突显了在实际场景中的实用性。
更新时间: 2025-05-31 17:43:54
领域: cs.CV,cs.CR
Rethinking Backdoor Attacks on Dataset Distillation: A Kernel Method Perspective
Dataset distillation offers a potential means to enhance data efficiency in deep learning. Recent studies have shown its ability to counteract backdoor risks present in original training samples. In this study, we delve into the theoretical aspects of backdoor attacks and dataset distillation based on kernel methods. We introduce two new theory-driven trigger pattern generation methods specialized for dataset distillation. Following a comprehensive set of analyses and experiments, we show that our optimization-based trigger design framework informs effective backdoor attacks on dataset distillation. Notably, datasets poisoned by our designed trigger prove resilient against conventional backdoor attack detection and mitigation methods. Our empirical results validate that the triggers developed using our approaches are proficient at executing resilient backdoor attacks.
Updated: 2025-05-31 16:47:36
标题: 重新思考对数据集精炼的后门攻击:一个核方法的视角
摘要: 数据集精炼为深度学习中提高数据效率提供了潜在手段。最近的研究表明它能够对抗原始训练样本中存在的后门风险。在本研究中,我们深入探讨基于核方法的后门攻击和数据集精炼的理论方面。我们引入了两种新的基于理论驱动的触发模式生成方法,专门用于数据集精炼。在进行了一系列全面的分析和实验后,我们展示了我们基于优化的触发器设计框架如何指导数据集精炼中的有效后门攻击。值得注意的是,我们设计的触发器污染的数据集对传统后门攻击检测和缓解方法表现出了弹性。我们的实证结果验证了使用我们方法开发的触发器在执行弹性后门攻击方面的能力。
更新时间: 2025-05-31 16:47:36
领域: cs.LG,cs.CR,68T05
Communication Efficient Multiparty Private Set Intersection from Multi-Point Sequential OPRF
Multiparty private set intersection (MPSI) allows multiple participants to compute the intersection of their locally owned data sets without revealing them. MPSI protocols can be categorized based on the network topology of nodes, with the star, mesh, and ring topologies being the primary types, respectively. Given that star and mesh topologies dominate current implementations, most existing MPSI protocols are based on these two topologies. However, star-topology MPSI protocols suffer from high leader node load, while mesh topology protocols suffer from high communication complexity and overhead. In this paper, we first propose a multi-point sequential oblivious pseudorandom function (MP-SOPRF) in a multi-party setting. Based on MP-SOPRF, we then develop an MPSI protocol with a ring topology, addressing the challenges of communication and computational overhead in existing protocols. We prove that our MPSI protocol is semi-honest secure under the Hamming correlation robustness assumption. Our experiments demonstrate that our MPSI protocol outperforms state-of-the-art protocols, achieving a reduction of 74.8% in communication and a 6% to 287% improvement in computational efficiency.
Updated: 2025-05-31 13:50:40
标题: 多方私有集合交集的高效通信通信:来自多点顺序OPRF
摘要: 多方私有集合交集(MPSI)允许多个参与者计算其本地拥有的数据集的交集,而无需透露它们。 MPSI协议可以根据节点的网络拓扑进行分类,星形、网状和环形拓扑分别是主要类型。鉴于当前实现中星形和网状拓扑占主导地位,大多数现有的MPSI协议都基于这两种拓扑。然而,星形拓扑的MPSI协议受到领导节点负载高的影响,而网状拓扑协议受到通信复杂性和开销高的影响。在本文中,我们首先提出了在多方设置中的多点顺序遗忘伪随机函数(MP-SOPRF)。基于MP-SOPRF,我们进一步开发了一个具有环形拓扑的MPSI协议,解决了现有协议中通信和计算开销的挑战。我们证明了我们的MPSI协议在汉明相关性鲁棒性假设下是半诚实安全的。我们的实验表明,我们的MPSI协议优于最先进的协议,在通信效率方面实现了74.8%的减少,计算效率方面实现了6%至287%的提高。
更新时间: 2025-05-31 13:50:40
领域: cs.CR
Con Instruction: Universal Jailbreaking of Multimodal Large Language Models via Non-Textual Modalities
Existing attacks against multimodal language models (MLLMs) primarily communicate instructions through text accompanied by adversarial images. In contrast, we exploit the capabilities of MLLMs to interpret non-textual instructions, specifically, adversarial images or audio generated by our novel method, Con Instruction. We optimize these adversarial examples to align closely with target instructions in the embedding space, revealing the detrimental implications of MLLMs' sophisticated understanding. Unlike prior work, our method does not require training data or preprocessing of textual instructions. While these non-textual adversarial examples can effectively bypass MLLM safety mechanisms, their combination with various text inputs substantially amplifies attack success. We further introduce a new Attack Response Categorization (ARC) framework, which evaluates both the quality of the model's response and its relevance to the malicious instructions. Experimental results demonstrate that Con Instruction effectively bypasses safety mechanisms in multiple vision- and audio-language models, including LLaVA-v1.5, InternVL, Qwen-VL, and Qwen-Audio, evaluated on two standard benchmarks: AdvBench and SafeBench. Specifically, our method achieves the highest attack success rates, reaching 81.3% and 86.6% on LLaVA-v1.5 (13B). On the defense side, we explore various countermeasures against our attacks and uncover a substantial performance gap among existing techniques. Our implementation is made publicly available.
Updated: 2025-05-31 13:11:14
标题: Con Instruction: 通过非文本模态的方法解锁多模态大型语言模型
摘要: 现有的对多模式语言模型(MLLMs)的攻击主要通过文本和伴随的对抗性图像传达指令。相比之下,我们利用MLLMs的能力来解释非文本指令,具体来说,是通过我们的新方法Con Instruction 生成的对抗性图像或音频。我们优化这些对抗性示例,使其在嵌入空间中与目标指令密切对齐,揭示了MLLMs复杂理解的有害影响。与先前的工作不同,我们的方法不需要训练数据或对文本指令进行预处理。虽然这些非文本对抗性示例可以有效地绕过MLLMs的安全机制,但与各种文本输入的结合大大增加了攻击成功率。我们进一步引入了一种新的攻击响应分类(ARC)框架,评估模型响应的质量及其与恶意指令的相关性。实验结果表明,Con Instruction 在多个视觉和音频语言模型上有效地绕过了安全机制,包括 LLaVA-v1.5、InternVL、Qwen-VL 和 Qwen-Audio,在两个标准基准测试中评估:AdvBench 和 SafeBench。具体而言,我们的方法在 LLaVA-v1.5(13B)上取得了最高的攻击成功率,分别达到了 81.3% 和 86.6%。在防御方面,我们探索了各种对抗我们攻击的对策,并发现了现有技术之间存在实质性的性能差距。我们的实现已公开可用。
更新时间: 2025-05-31 13:11:14
领域: cs.CR,cs.CL,cs.LG
Docker under Siege: Securing Containers in the Modern Era
Containerization, driven by Docker, has transformed application development and deployment by enhancing efficiency and scalability. However, the rapid adoption of container technologies introduces significant security challenges that require careful management. This paper investigates key areas of container security, including runtime protection, network safeguards, configuration best practices, supply chain security, and comprehensive monitoring and logging solutions. We identify common vulnerabilities within these domains and provide actionable recommendations to address and mitigate these risks. By integrating security throughout the Software Development Lifecycle (SDLC), organizations can reinforce their security posture, creating a resilient and reliable containerized application infrastructure that withstands evolving threats.
Updated: 2025-05-31 13:00:52
标题: Docker受围攻:在现代时代保护容器
摘要: 集装箱技术,由Docker推动,已经通过提高效率和可扩展性,改变了应用程序开发和部署。然而,容器技术的快速采用引入了重要的安全挑战,需要谨慎管理。本文调查了容器安全的关键领域,包括运行时保护、网络保障、配置最佳实践、供应链安全以及全面的监控和日志解决方案。我们确定了这些领域内的常见漏洞,并提供可操作的建议,以解决和减轻这些风险。通过在软件开发生命周期(SDLC)中整合安全性,组织可以加强其安全姿态,创建一个具有弹性和可靠性的容器化应用基础设施,以抵御不断演变的威胁。
更新时间: 2025-05-31 13:00:52
领域: cs.CR
The Security Threat of Compressed Projectors in Large Vision-Language Models
The choice of a suitable visual language projector (VLP) is critical to the successful training of large visual language models (LVLMs). Mainstream VLPs can be broadly categorized into compressed and uncompressed projectors, and each offering distinct advantages in performance and computational efficiency. However, their security implications have not been thoroughly examined. Our comprehensive evaluation reveals significant differences in their security profiles: compressed projectors exhibit substantial vulnerabilities, allowing adversaries to successfully compromise LVLMs even with minimal knowledge of structural information. In stark contrast, uncompressed projectors demonstrate robust security properties and do not introduce additional vulnerabilities. These findings provide critical guidance for researchers in selecting optimal VLPs that enhance the security and reliability of visual language models. The code will be released.
Updated: 2025-05-31 12:43:56
标题: 大型视觉-语言模型中压缩投影仪的安全威胁
摘要: 选择适合的视觉语言投影仪(VLP)对于成功训练大型视觉语言模型(LVLMs)至关重要。主流VLP可以大致分为压缩和非压缩投影仪,每种在性能和计算效率方面都具有明显优势。然而,它们的安全性影响尚未得到全面检查。我们的综合评估揭示了它们安全性特征的显著差异:压缩投影仪存在重大漏洞,使得对手甚至在了解结构信息很少的情况下就能成功破坏LVLMs。相比之下,非压缩投影仪展现出强大的安全特性,不引入额外漏洞。这些发现为研究人员在选择能增强视觉语言模型安全性和可靠性的最佳VLP提供了重要指导。该代码将会发布。
更新时间: 2025-05-31 12:43:56
领域: cs.CR,cs.AI
The TIP of the Iceberg: Revealing a Hidden Class of Task-in-Prompt Adversarial Attacks on LLMs
We present a novel class of jailbreak adversarial attacks on LLMs, termed Task-in-Prompt (TIP) attacks. Our approach embeds sequence-to-sequence tasks (e.g., cipher decoding, riddles, code execution) into the model's prompt to indirectly generate prohibited inputs. To systematically assess the effectiveness of these attacks, we introduce the PHRYGE benchmark. We demonstrate that our techniques successfully circumvent safeguards in six state-of-the-art language models, including GPT-4o and LLaMA 3.2. Our findings highlight critical weaknesses in current LLM safety alignments and underscore the urgent need for more sophisticated defence strategies. Warning: this paper contains examples of unethical inquiries used solely for research purposes.
Updated: 2025-05-31 11:52:11
标题: 冰山尖端:揭示一类隐藏的LLM任务中提示对抗攻击
摘要: 我们提出一种新颖的破解对抗攻击LLMs的方法,称为任务提示(TIP)攻击。我们的方法将序列到序列任务(例如,密码解码、谜语、代码执行)嵌入到模型提示中,间接生成禁止的输入。为了系统评估这些攻击的有效性,我们引入了PHRYGE基准测试。我们展示了我们的技术成功地规避了六种最先进的语言模型中的防护措施,包括GPT-4o和LLaMA 3.2。我们的发现突出了当前LLM安全对齐中的关键弱点,并强调了更复杂的防御策略的迫切需要。 警告:本文包含仅用于研究目的的不道德查询示例。
更新时间: 2025-05-31 11:52:11
领域: cs.CR,cs.AI,cs.CL
Robust and Verifiable MPC with Applications to Linear Machine Learning Inference
In this work, we present an efficient secure multi-party computation MPC protocol that provides strong security guarantees in settings with dishonest majority of participants who may behave arbitrarily. Unlike the popular MPC implementation known as SPDZ [Crypto '12], which only ensures security with abort, our protocol achieves both complete identifiability and robustness. With complete identifiability, honest parties can detect and unanimously agree on the identity of any malicious party. Robustness allows the protocol to continue with the computation without requiring a restart, even when malicious behavior is detected. Additionally, our approach addresses the performance limitations observed in the protocol by Cunningham et al. [ICITS '17], which, while achieving complete identifiability, is hindered by the costly exponentiation operations required by the choice of commitment scheme. Our protocol is based on the approach by Rivinius et al. [S&P '22], utilizing lattice-based commitment for better efficiency. We achieved robustness with the help of a semi-honest trusted third party. We benchmark our robust protocol, showing the efficient recovery from parties' malicious behavior. Finally, we benchmark our protocol on a ML-as-a-service scenario, wherein clients off-load the desired computation to the servers, and verify the computation result. We benchmark on linear ML inference, running on various datasets. While our efficiency is slightly lower compared to SPDZ's, we offer stronger security properties that provide distinct advantages.
Updated: 2025-05-31 11:26:57
标题: 具有鲁棒性和可验证性的MPC及其在线性机器学习推断中的应用
摘要: 在这项工作中,我们提出了一种高效的安全多方计算MPC协议,该协议在存在可能表现任意的不诚实多数参与者的情况下提供强大的安全保障。与仅能确保有中止安全的流行MPC实现SPDZ[Crypto '12]不同,我们的协议实现了完全可识别性和鲁棒性。通过完全可识别性,诚实方可以检测并一致同意任何恶意方的身份。鲁棒性允许协议在检测到恶意行为时继续计算而无需重新启动。此外,我们的方法解决了Cunningham等人的协议[ICITS '17]中观察到的性能限制,该协议虽然实现了完全可识别性,但由于选择承诺方案需要昂贵的指数运算而受到阻碍。 我们的协议基于Rivinius等人的方法[S&P '22],利用基于格的承诺以提高效率。通过半诚实的可信第三方的帮助,我们实现了鲁棒性。我们对我们的鲁棒协议进行基准测试,展示了从参与者恶意行为中高效恢复的能力。 最后,我们在ML作为服务场景下对我们的协议进行基准测试,在该场景中,客户将所需计算卸载到服务器,并验证计算结果。我们在各种数据集上运行线性ML推断进行基准测试。虽然我们的效率略低于SPDZ的效率,但我们提供了更强大的安全性属性,提供了明显的优势。
更新时间: 2025-05-31 11:26:57
领域: cs.CR
Scaling DeFi with ZK Rollups: Design, Deployment, and Evaluation of a Real-Time Proof-of-Concept
Ethereum's scalability limitations pose significant challenges for the adoption of decentralized applications (dApps). Zero-Knowledge Rollups (ZK Rollups) present a promising solution, bundling transactions off-chain and submitting validity proofs on-chain to enhance throughput and efficiency. In this work, we examine the technical underpinnings of ZK Rollups and stress test their performance in real-world applications in decentralized finance (DeFi). We set up a proof-of-concept (PoC) consisting of ZK rollup and decentralized exchange, and implement load balancer generating token swaps. Our results show that the rollup can process up to 71 swap transactions per second, compared to 12 general transaction by Ethereum. We further analyze transaction finality trade-offs with related security concerns, and discuss the future directions for integrating ZK Rollups into Ethereum's broader ecosystem.
Updated: 2025-05-31 10:39:24
标题: 使用ZK Rollups扩展DeFi:实时概念验证的设计、部署和评估
摘要: 以太坊的可扩展性限制对去中心化应用程序(dApps)的采用提出了重大挑战。零知识Rollups(ZK Rollups)提供了一个有前途的解决方案,将交易捆绑在链下并在链上提交有效性证明,以增强吞吐量和效率。在这项工作中,我们研究了ZK Rollups的技术基础,并在去中心化金融(DeFi)的实际应用中对其性能进行了压力测试。我们建立了一个由ZK Rollup和去中心化交易所组成的概念验证(PoC),并实现了生成代币交换的负载平衡器。我们的结果显示,与以太坊的12笔常规交易相比,Rollup可以每秒处理高达71笔交换交易。我们进一步分析了与相关安全问题相关的交易最终性的权衡,并讨论了将ZK Rollups整合到以太坊更广泛生态系统的未来方向。
更新时间: 2025-05-31 10:39:24
领域: cs.CR
BDPFL: Backdoor Defense for Personalized Federated Learning via Explainable Distillation
Federated learning is a distributed learning paradigm that facilitates the collaborative training of a global model across multiple clients while preserving the privacy of local datasets. To address inherent challenges related to data heterogeneity and satisfy personalized needs, a new direction within FL, known as personalized Federated Learning (pFL), has gradually evolved. Extensive attention has been directed toward developing novel frameworks and methods to enhance the performance of pFL. Regrettably, the aspect of security in pFL has been largely overlooked. Our objective is to fill this gap. Similar to FL, pFL is susceptible to backdoor attacks. However, existing backdoor defense strategies are primarily tailored to general FL frameworks, and pFL lacks robustness against backdoor attacks. We propose a novel, backdoor-robust pFL framework named BDPFL to address these challenges. First, BDPFL introduces layer-wise mutual distillation that enables clients to learn their personalized local models while mitigating potential backdoors. Then, BDPFL employs explanation heatmap to learn high-quality intermediate representations and enhance the effect of eliminating deeper and more entrenched backdoors. Moreover, we perform empirical evaluations of BDPFL's performance on three datasets and compare BDPFL with four backdoor defense methods. The experiments demonstrate that BDPFL outperforms baseline methods and is effective under various settings.
Updated: 2025-05-31 10:10:47
标题: BDPFL:通过可解释蒸馏实现个性化联邦学习的后门防御
摘要: 联邦学习是一种分布式学习范式,促进了跨多个客户端协同训练全局模型,同时保护本地数据集的隐私。为了解决与数据异质性相关的固有挑战并满足个性化需求,一个新的方向在联邦学习中逐渐发展,被称为个性化联邦学习(pFL)。人们已经广泛关注开发新的框架和方法,以提高pFL的性能。遗憾的是,pFL中的安全性方面被大多数人忽视了。我们的目标是填补这一空白。与联邦学习类似,pFL容易受到后门攻击。然而,现有的后门防御策略主要针对通用联邦学习框架,而pFL缺乏对后门攻击的强大防御能力。我们提出了一种新颖的、具有后门鲁棒性的pFL框架,名为BDPFL,以解决这些挑战。首先,BDPFL引入了层次互相蒸馏,使客户端能够学习其个性化本地模型,同时减轻潜在的后门风险。然后,BDPFL采用解释热图来学习高质量的中间表示,并增强消除更深层和更根深蒂固的后门的效果。此外,我们对BDPFL在三个数据集上的表现进行了实证评估,并将BDPFL与四种后门防御方法进行了比较。实验证明,BDPFL优于基线方法,并且在各种设置下都是有效的。
更新时间: 2025-05-31 10:10:47
领域: cs.CR
Bridging the Gap between Hardware Fuzzing and Industrial Verification
As hardware design complexity increases, hardware fuzzing emerges as a promising tool for automating the verification process. However, a significant gap still exists before it can be applied in industry. This paper aims to summarize the current progress of hardware fuzzing from an industry-use perspective and propose solutions to bridge the gap between hardware fuzzing and industrial verification. First, we review recent hardware fuzzing methods and analyze their compatibilities with industrial verification. We establish criteria to assess whether a hardware fuzzing approach is compatible. Second, we examine whether current verification tools can efficiently support hardware fuzzing. We identify the bottlenecks in hardware fuzzing performance caused by insufficient support from the industrial environment. To overcome the bottlenecks, we propose a prototype, HwFuzzEnv, providing the necessary support for hardware fuzzing. With this prototype, the previous hardware fuzzing method can achieve a several hundred times speedup in industrial settings. Our work could serve as a reference for EDA companies, encouraging them to enhance their tools to support hardware fuzzing efficiently in industrial verification.
Updated: 2025-05-31 08:26:19
标题: 弥合硬件模糊测试与工业验证之间的差距
摘要: 随着硬件设计复杂性的增加,硬件模糊测试作为自动化验证过程的一种有前途的工具应运而生。然而,在它能够在工业领域应用之前,仍然存在着重大差距。本文旨在从工业应用的角度总结硬件模糊测试的当前进展,并提出解决方案来弥合硬件模糊测试和工业验证之间的差距。首先,我们回顾了最近的硬件模糊测试方法,并分析它们与工业验证的兼容性。我们建立了评估硬件模糊测试方法是否兼容的标准。其次,我们检查当前的验证工具是否能够有效支持硬件模糊测试。我们确定了由于工业环境对硬件模糊测试性能支持不足而导致的瓶颈。为了克服这些瓶颈,我们提出了一个原型,名为HwFuzzEnv,为硬件模糊测试提供必要的支持。有了这个原型,先前的硬件模糊测试方法在工业环境中可以实现数百倍的加速。我们的工作可以作为EDA公司的参考,鼓励他们改进工具以有效支持硬件模糊测试在工业验证中的应用。
更新时间: 2025-05-31 08:26:19
领域: cs.CR,cs.AR
WET: Overcoming Paraphrasing Vulnerabilities in Embeddings-as-a-Service with Linear Transformation Watermarks
Embeddings-as-a-Service (EaaS) is a service offered by large language model (LLM) developers to supply embeddings generated by LLMs. Previous research suggests that EaaS is prone to imitation attacks -- attacks that clone the underlying EaaS model by training another model on the queried embeddings. As a result, EaaS watermarks are introduced to protect the intellectual property of EaaS providers. In this paper, we first show that existing EaaS watermarks can be removed by paraphrasing when attackers clone the model. Subsequently, we propose a novel watermarking technique that involves linearly transforming the embeddings, and show that it is empirically and theoretically robust against paraphrasing.
Updated: 2025-05-31 08:16:14
标题: WET: 使用线性转换水印克服嵌入式服务中的释义漏洞
摘要: Embeddings-as-a-Service(EaaS)是由大型语言模型(LLM)开发人员提供的服务,用于提供由LLMs生成的嵌入。先前的研究表明,EaaS容易受到模仿攻击的影响 - 即利用查询的嵌入训练另一个模型来克隆底层的EaaS模型。因此,引入了EaaS水印来保护EaaS提供者的知识产权。在本文中,我们首先展示了现有的EaaS水印在攻击者克隆模型时可以通过改写来移除。随后,我们提出了一种涉及线性转换嵌入的新型水印技术,并展示了在实践和理论上它对改写是稳健的。
更新时间: 2025-05-31 08:16:14
领域: cs.CR,cs.CL,cs.LG
Practical Adversarial Attacks on Stochastic Bandits via Fake Data Injection
Adversarial attacks on stochastic bandits have traditionally relied on some unrealistic assumptions, such as per-round reward manipulation and unbounded perturbations, limiting their relevance to real-world systems. We propose a more practical threat model, Fake Data Injection, which reflects realistic adversarial constraints: the attacker can inject only a limited number of bounded fake feedback samples into the learner's history, simulating legitimate interactions. We design efficient attack strategies under this model, explicitly addressing both magnitude constraints (on reward values) and temporal constraints (on when and how often data can be injected). Our theoretical analysis shows that these attacks can mislead both Upper Confidence Bound (UCB) and Thompson Sampling algorithms into selecting a target arm in nearly all rounds while incurring only sublinear attack cost. Experiments on synthetic and real-world datasets validate the effectiveness of our strategies, revealing significant vulnerabilities in widely used stochastic bandit algorithms under practical adversarial scenarios.
Updated: 2025-05-31 07:08:47
标题: 通过伪造数据注入对随机波动机器实施实用的对抗攻击
摘要: 对随机赌博机的对抗攻击传统上依赖于一些不现实的假设,例如每轮奖励操纵和无界扰动,限制了它们与现实世界系统的相关性。我们提出了一个更实用的威胁模型,即虚假数据注入,反映了现实的对抗性约束:攻击者只能向学习者的历史中注入有限数量的有界虚假反馈样本,模拟合法互动。我们在这个模型下设计了高效的攻击策略,明确解决了奖励值的幅度约束和数据注入的时间约束。我们的理论分析表明,这些攻击可以误导上限置信界(UCB)和汤普森采样算法在几乎所有轮次中选择目标手臂,同时仅产生亚线性攻击成本。对合成和真实数据集的实验验证了我们策略的有效性,揭示了在实际对抗情景下广泛使用的随机赌博机算法存在重大漏洞。
更新时间: 2025-05-31 07:08:47
领域: cs.LG,cs.AI,cs.CR
Hybrid Cloud Security: Balancing Performance, Cost, and Compliance in Multi-Cloud Deployments
The pervasive use of hybrid cloud computing models has changed enterprise as well as Information Technology services infrastructure by giving businesses simple and cost-effective options of combining on-premise IT equipment with public cloud services. hybrid cloud solutions deploy multifaceted models of security, performance optimization, and cost efficiency, conventionally fragmented in the cloud computing milieu. This paper examines how organizations manage these parameters in hybrid cloud ecosystems while providing solutions to the challenges they face in operationalizing hybrid cloud adoptions. The study captures the challenges of achieving a balance in resource distribution between on-premise and cloud resources (herein referred to as the "resource allocation challenge"), the complexity of pricing models from cloud providers like AWS, Microsoft Azure, Google Cloud (herein called the 'pricing complexity problem'), and the urgency for strong security infrastructure to safeguard sensitive information (known as 'the information security problem'). This study demonstrates the security and performance management solutions proposed were validated in a detailed case study of adoption of AWS and Azure based hybrid cloud and provides useful guidance. Also, a hybrid cloud security and cost optimization framework based on zero trust architecture, encryption, hybrid cloud policies, and others, is proposed. The conclusion includes recommendations for research on automation of hybrid cloud service management, integration of multi-clouds, and the ever-present question of data privacy, stressing how those matters affect contemporary enterprises.
Updated: 2025-05-31 07:04:08
标题: 混合云安全:在多云部署中平衡性能、成本和合规性
摘要: 混合云计算模型的广泛应用已经改变了企业以及信息技术服务基础设施,为企业提供了将本地IT设备与公共云服务结合的简单且具有成本效益的选择。混合云解决方案部署了多方面的安全性模型、性能优化和成本效益,传统上在云计算领域中是分散的。本文研究了组织在混合云生态系统中如何管理这些参数,同时提供了解决他们在操作混合云采用中所面临挑战的解决方案。该研究捕捉了在本地和云资源之间实现资源分配平衡的挑战(在此称为“资源分配挑战”),来自云提供商如AWS、微软Azure、Google Cloud的定价模型复杂性(在此称为“定价复杂性问题”),以及迫切需要强大的安全基础设施来保护敏感信息(称为“信息安全问题”)。这项研究表明,提出的安全性和性能管理解决方案在采用基于AWS和Azure的混合云的详细案例研究中得到了验证,并提供了有用的指导。此外,提出基于零信任架构、加密、混合云政策等的混合云安全和成本优化框架。 结论包括对混合云服务管理自动化、多云集成以及数据隐私永远存在的问题的研究建议,强调这些问题如何影响当代企业。
更新时间: 2025-05-31 07:04:08
领域: cs.CR
Blockchain Powered Edge Intelligence for U-Healthcare in Privacy Critical and Time Sensitive Environment
Edge Intelligence (EI) serves as a critical enabler for privacy-preserving systems by providing AI-empowered computation and distributed caching services at the edge, thereby minimizing latency and enhancing data privacy. The integration of blockchain technology further augments EI frameworks by ensuring transactional transparency, auditability, and system-wide reliability through a decentralized network model. However, the operational architecture of such systems introduces inherent vulnerabilities, particularly due to the extensive data interactions between edge gateways (EGs) and the distributed nature of information storage during service provisioning. To address these challenges, we propose an autonomous computing model along with its interaction topologies tailored for privacy-critical and time-sensitive health applications. The system supports continuous monitoring, real-time alert notifications, disease detection, and robust data processing and aggregation. It also includes a data transaction handler and mechanisms for ensuring privacy at the EGs. Moreover, a resource-efficient one-dimensional convolutional neural network (1D-CNN) is proposed for the multiclass classification of arrhythmia, enabling accurate and real-time analysis of constrained EGs. Furthermore, a secure access scheme is defined to manage both off-chain and on-chain data sharing and storage. To validate the proposed model, comprehensive security, performance, and cost analyses are conducted, demonstrating the efficiency and reliability of the fine-grained access control scheme.
Updated: 2025-05-31 06:58:52
标题: 区块链驱动的边缘智能在隐私关键和时间敏感环境中的U-Healthcare
摘要: 边缘智能(EI)作为隐私保护系统的关键推动者,通过在边缘提供AI增强的计算和分布式缓存服务,从而最大程度地减少延迟并增强数据隐私。区块链技术的整合进一步增强了EI框架,通过分散网络模型确保交易透明度、可审计性和系统整体可靠性。然而,这种系统的运行架构引入了固有的脆弱性,特别是由于边缘网关(EGs)之间的大量数据交互和服务提供过程中信息存储的分布性。为了解决这些挑战,我们提出了一种针对隐私关键和时间敏感的健康应用定制的自主计算模型及其交互拓扑。该系统支持持续监测、实时警报通知、疾病检测以及强大的数据处理和聚合。它还包括数据交易处理程序和确保EGs隐私的机制。此外,提出了一种资源高效的一维卷积神经网络(1D-CNN)用于心律失常的多类别分类,实现了受限EGs的准确和实时分析。此外,定义了一种安全访问方案,用于管理链下和链上数据共享和存储。为验证所提出的模型,进行了全面的安全性、性能和成本分析,展示了细粒度访问控制方案的效率和可靠性。
更新时间: 2025-05-31 06:58:52
领域: cs.CR,cs.LG
Teaching an Old LLM Secure Coding: Localized Preference Optimization on Distilled Preferences
LLM generated code often contains security issues. We address two key challenges in improving secure code generation. First, obtaining high quality training data covering a broad set of security issues is critical. To address this, we introduce a method for distilling a preference dataset of insecure and secure code pairs from frontier LLMs, along with a security reasoning that explains the issues and the fix. The key idea here is to make use of security knowledge sources to devise a systematic prompting strategy that ensures broad coverage. Second, aligning models to secure code requires focusing on localized regions of code. Direct preference optimization methods, like SimPO, are not designed to handle these localized differences and turn out to be ineffective. We address this with a new localized preference optimization algorithm that masks the security related tokens in both the winning (secure) and losing (insecure) responses. To prevent loss in code quality, we also add a regularizer. Evaluations show that both training on our dataset, DiSCo, and the new preference optimization algorithm, LPO, yield substantial reductions in code insecurity while also improving overall code quality. Code and dataset are available at https://github.com/StonyBrookNLP/disco-lpo.
Updated: 2025-05-31 06:48:12
标题: 教授老的LLM安全编码:对精炼偏好的本地化优化
摘要: LLM 生成的代码通常存在安全问题。我们解决了改进安全代码生成的两个关键挑战。首先,获取涵盖广泛安全问题集合的高质量训练数据至关重要。为了解决这个问题,我们引入了一种方法,从前沿 LLM 中提炼一组不安全和安全代码对的偏好数据集,同时提供一个安全推理来解释问题和修复方法。关键思想是利用安全知识源设计一个系统提示策略,确保广泛覆盖。其次,将模型对齐到安全代码需要关注代码的局部区域。直接的偏好优化方法,如 SimPO,并不适用于处理这些局部差异,结果表明效果不佳。我们通过一种新的局部偏好优化算法来解决这个问题,该算法在获胜(安全)和失利(不安全)响应中掩盖了与安全相关的标记。为了防止代码质量下降,我们还添加了一个正则化器。评估表明,使用我们的数据集 DiSCo 进行训练和新的偏好优化算法 LPO,可以显著减少代码不安全性,并改善整体代码质量。代码和数据集可在 https://github.com/StonyBrookNLP/disco-lpo 获取。
更新时间: 2025-05-31 06:48:12
领域: cs.CR,cs.SE
Blockchain-Enabled Privacy-Preserving Second-Order Federated Edge Learning in Personalized Healthcare
Federated learning (FL) has attracted increasing attention to mitigate security and privacy challenges in traditional cloud-centric machine learning models specifically in healthcare ecosystems. FL methodologies enable the training of global models through localized policies, allowing independent operations at the edge clients' level. Conventional first-order FL approaches face several challenges in personalized model training due to heterogeneous non-independent and identically distributed (non-iid) data of each edge client. Recently, second-order FL approaches maintain the stability and consistency of non-iid datasets while improving personalized model training. This study proposes and develops a verifiable and auditable optimized second-order FL framework BFEL (blockchain-enhanced federated edge learning) based on optimized FedCurv for personalized healthcare systems. FedCurv incorporates information about the importance of each parameter to each client's task (through Fisher Information Matrix) which helps to preserve client-specific knowledge and reduce model drift during aggregation. Moreover, it minimizes communication rounds required to achieve a target precision convergence for each edge client while effectively managing personalized training on non-iid and heterogeneous data. The incorporation of Ethereum-based model aggregation ensures trust, verifiability, and auditability while public key encryption enhances privacy and security. Experimental results of federated CNNs and MLPs utilizing Mnist, Cifar-10, and PathMnist demonstrate the high efficiency and scalability of the proposed framework.
Updated: 2025-05-31 06:41:04
标题: 区块链技术支持的隐私保护的个性化医疗边缘二阶联邦学习
摘要: 联邦学习(FL)已引起越来越多的关注,以缓解传统云中心机器学习模型在医疗生态系统中的安全和隐私挑战。FL方法使全局模型通过本地政策进行训练,允许在边缘客户端级别进行独立操作。传统的一阶FL方法在个性化模型训练中面临几个挑战,因为每个边缘客户端的数据是异构的、非独立分布的。最近,二阶FL方法在改进个性化模型训练的同时,保持了非独立分布数据集的稳定性和一致性。本研究提出并开发了一个可验证和可审计的优化二阶FL框架BFEL(基于优化FedCurv的区块链增强联邦边缘学习),用于个性化医疗系统。FedCurv整合了关于每个参数对每个客户任务重要性的信息(通过费舍尔信息矩阵),有助于保留客户特定知识并减少在聚合过程中的模型漂移。此外,它最小化了为每个边缘客户实现目标精度收敛所需的通信轮次,同时有效管理非独立分布和异构数据的个性化训练。基于以太坊的模型聚合确保了信任、可验证性和可审计性,而公钥加密增强了隐私和安全性。利用Mnist、Cifar-10和PathMnist的联邦CNN和MLP的实验结果展示了所提出框架的高效性和可扩展性。
更新时间: 2025-05-31 06:41:04
领域: cs.LG,cs.CR,stat.ML
Adaptive and Efficient Dynamic Memory Management for Hardware Enclaves
The second version of Intel Software Guard Extensions (Intel SGX), or SGX2, adds dynamic management of enclave memory and threads. The first version required the address space and thread counts to be fixed before execution. The Enclave Dynamic Memory Management (EDMM) feature of SGX2 has the potential to lower launch times and overall execution time. Despite reducing the enclave loading time by 28--93%, straightforward EDMM adoption strategies actually slow execution time down by as much as 58%. Using the Gramine library OS as a representative enclave runtime environment, this paper shows how to recover EDMM performance. The paper explains how implementing mutual distrust between the OS and enclave increases the cost of modifying page mappings. The paper then describes and evaluates a series of optimizations on application benchmarks, showing that these optimizations effectively eliminate the overheads of EDMM while retaining EDMM's performance and flexibility gains.
Updated: 2025-05-31 06:04:38
标题: 硬件飞地的自适应高效动态内存管理
摘要: Intel软件保护扩展(Intel SGX)的第二个版本,即SGX2,增加了对飞地内存和线程的动态管理。第一个版本要求在执行之前固定地址空间和线程计数。SGX2的Enclave动态内存管理(EDMM)功能有潜力降低启动时间和整体执行时间。尽管通过简单的EDMM采用策略将飞地加载时间减少了28-93%,但实际上会使执行时间减慢高达58%。本文以Gramine库OS作为代表性的飞地运行时环境,展示了如何恢复EDMM性能。本文解释了在OS和飞地之间实施互不信任如何增加修改页面映射的成本。然后描述并评估了一系列优化在应用基准上的效果,显示这些优化有效地消除了EDMM的开销,同时保留了EDMM的性能和灵活性收益。
更新时间: 2025-05-31 06:04:38
领域: cs.OS,cs.CR
PADetBench: Towards Benchmarking Physical Attacks against Object Detection
Physical attacks against object detection have gained increasing attention due to their significant practical implications. However, conducting physical experiments is extremely time-consuming and labor-intensive. Moreover, physical dynamics and cross-domain transformation are challenging to strictly regulate in the real world, leading to unaligned evaluation and comparison, severely hindering the development of physically robust models. To accommodate these challenges, we explore utilizing realistic simulation to thoroughly and rigorously benchmark physical attacks with fairness under controlled physical dynamics and cross-domain transformation. This resolves the problem of capturing identical adversarial images that cannot be achieved in the real world. Our benchmark includes 20 physical attack methods, 48 object detectors, comprehensive physical dynamics, and evaluation metrics. We also provide end-to-end pipelines for dataset generation, detection, evaluation, and further analysis. In addition, we perform 8064 groups of evaluation based on our benchmark, which includes both overall evaluation and further detailed ablation studies for controlled physical dynamics. Through these experiments, we provide in-depth analyses of physical attack performance and physical adversarial robustness, draw valuable observations, and discuss potential directions for future research. Codebase: https://github.com/JiaweiLian/Benchmarking_Physical_Attack
Updated: 2025-05-31 06:03:18
标题: PADetBench:面向物体检测的物理攻击基准测试
摘要: 对物体检测的物理攻击引起了越来越多的关注,因为它们在实践中具有重要的意义。然而,进行物理实验非常耗时且劳动密集。此外,在现实世界中,物理动态和跨领域转换具有挑战性,难以严格调控,导致评估和比较不一致,严重阻碍了物理鲁棒模型的发展。为了解决这些挑战,我们探索利用逼真的模拟,在受控的物理动态和跨领域转换下,全面且严格地评估物理攻击,实现公平。这解决了无法在现实世界中实现相同对抗性图像的问题。我们的评估包括20种物理攻击方法,48种目标检测器,全面的物理动态和评估指标。我们还提供了数据集生成、检测、评估和进一步分析的端到端流程。此外,我们基于我们的评估进行了8064组评估,包括整体评估和受控物理动态的进一步详细消融研究。通过这些实验,我们提供了物理攻击性能和物理对抗鲁棒性的深入分析,得出了有价值的观察结果,并讨论了未来研究的潜在方向。 代码库:https://github.com/JiaweiLian/Benchmarking_Physical_Attack
更新时间: 2025-05-31 06:03:18
领域: cs.CV,cs.CR,cs.LG
Adversarial Machine Learning for Robust Password Strength Estimation
Passwords remain one of the most common methods for securing sensitive data in the digital age. However, weak password choices continue to pose significant risks to data security and privacy. This study aims to solve the problem by focusing on developing robust password strength estimation models using adversarial machine learning, a technique that trains models on intentionally crafted deceptive passwords to expose and address vulnerabilities posed by such passwords. We apply five classification algorithms and use a dataset with more than 670,000 samples of adversarial passwords to train the models. Results demonstrate that adversarial training improves password strength classification accuracy by up to 20% compared to traditional machine learning models. It highlights the importance of integrating adversarial machine learning into security systems to enhance their robustness against modern adaptive threats. Keywords: adversarial attack, password strength, classification, machine learning
Updated: 2025-05-31 03:54:04
标题: 对抗性机器学习用于强密码强度估计
摘要: 密码仍然是数字时代保护敏感数据最常见的方法之一。然而,弱密码选择继续对数据安全和隐私构成重大风险。本研究旨在通过专注于利用对抗机器学习开发强大的密码强度估计模型来解决这一问题,这种技术通过对故意制作的欺骗性密码进行训练,以暴露并解决由这些密码引起的漏洞。我们应用了五种分类算法,并使用了一个包含超过67万个对抗密码样本的数据集来训练模型。结果表明,与传统机器学习模型相比,对抗训练可将密码强度分类准确性提高多达20%。它强调了将对抗机器学习整合到安全系统中以增强其对现代自适应威胁的健壮性的重要性。关键词:对抗攻击,密码强度,分类,机器学习
更新时间: 2025-05-31 03:54:04
领域: cs.CR
Keeping an Eye on LLM Unlearning: The Hidden Risk and Remedy
Although Large Language Models (LLMs) have demonstrated impressive capabilities across a wide range of tasks, growing concerns have emerged over the misuse of sensitive, copyrighted, or harmful data during training. To address these concerns, unlearning techniques have been developed to remove the influence of specific data without retraining from scratch. However, this paper reveals a critical vulnerability in fine-tuning-based unlearning: a malicious user can craft a manipulated forgetting request that stealthily degrades the model's utility for benign users. We demonstrate this risk through a red-teaming Stealthy Attack (SA), which is inspired by two key limitations of existing unlearning (the inability to constrain the scope of unlearning effect and the failure to distinguish benign tokens from unlearning signals). Prior work has shown that unlearned models tend to memorize forgetting data as unlearning signals, and respond with hallucinations or feigned ignorance when unlearning signals appear in the input. By subtly increasing the presence of common benign tokens in the forgetting data, SA enhances the connection between benign tokens and unlearning signals. As a result, when normal users include such tokens in their prompts, the model exhibits unlearning behaviors, leading to unintended utility degradation. To address this vulnerability, we propose Scope-aware Unlearning (SU), a lightweight enhancement that introduces a scope term into the unlearning objective, encouraging the model to localize the forgetting effect. Our method requires no additional data processing, integrates seamlessly with existing fine-tuning frameworks, and significantly improves robustness against SA. Extensive experiments validate the effectiveness of both SA and SU.
Updated: 2025-05-31 02:57:24
标题: 监控LLM去学习:隐藏的风险和补救措施
摘要: 尽管大型语言模型(LLMs)在广泛任务中展现出令人印象深刻的能力,但日益增长的关注集中在训练过程中对敏感、受版权保护或有害数据的滥用上。为了解决这些问题,已经开发了去学习技术,以去除特定数据的影响,而无需从头开始重新训练。然而,本文揭示了基于微调的去学习中的一个关键漏洞:恶意用户可以制作一个操纵性的遗忘请求,悄悄地降低模型对善意用户的效用。我们通过一个红队隐秘攻击(SA)来展示这一风险,该攻击受到现有去学习的两个关键限制的启发(无法限制去学习效果的范围和无法区分善意标记和去学习信号)。先前的研究表明,被去学习的模型往往会将遗忘数据作为去学习信号进行记忆,并在输入中出现去学习信号时会产生幻觉或假装无知。通过在遗忘数据中微妙增加常见的善意标记,SA增强了善意标记和去学习信号之间的联系。因此,当普通用户在他们的提示中包含这些标记时,模型表现出去学习行为,导致意外的效用降低。为了解决这一漏洞,我们提出了Scope-aware Unlearning(SU),这是一种轻量级增强,将一个范围项引入到去学习目标中,鼓励模型定位忘记效果。我们的方法不需要额外的数据处理,与现有微调框架无缝集成,并在抵御SA方面显著提高了鲁棒性。大量实验证实了SA和SU的有效性。
更新时间: 2025-05-31 02:57:24
领域: cs.CR
dpmm: Differentially Private Marginal Models, a Library for Synthetic Tabular Data Generation
We propose dpmm, an open-source library for synthetic data generation with Differentially Private (DP) guarantees. It includes three popular marginal models -- PrivBayes, MST, and AIM -- that achieve superior utility and offer richer functionality compared to alternative implementations. Additionally, we adopt best practices to provide end-to-end DP guarantees and address well-known DP-related vulnerabilities. Our goal is to accommodate a wide audience with easy-to-install, highly customizable, and robust model implementations. Our codebase is available from https://github.com/sassoftware/dpmm.
Updated: 2025-05-31 00:23:05
标题: dpmm:差分隐私边际模型,一个用于合成表格数据生成的库
摘要: 我们提出了dpmm,这是一个用于生成具有差分隐私(DP)保证的合成数据的开源库。它包括三种流行的边缘模型-- PrivBayes、MST和AIM--它们比替代实现实现了更优越的效用,并提供了更丰富的功能。此外,我们采用最佳实践提供端到端的DP保证,并解决了众所周知的与DP相关的漏洞。我们的目标是为广泛的受众提供易于安装、高度可定制和稳健的模型实现。 我们的代码库可在https://github.com/sassoftware/dpmm 上找到。
更新时间: 2025-05-31 00:23:05
领域: cs.CR,cs.AI,cs.LG
Asymmetry by Design: Boosting Cyber Defenders with Differential Access to AI
As AI-enabled cyber capabilities become more advanced, we propose "differential access" as a strategy to tilt the cybersecurity balance toward defense by shaping access to these capabilities. We introduce three possible approaches that form a continuum, becoming progressively more restrictive for higher-risk capabilities: Promote Access, Manage Access, and Deny by Default. However, a key principle across all approaches is the need to prioritize defender access, even in the most restrictive scenarios, so that defenders can prepare for adversaries gaining access to similar capabilities. This report provides a process to help frontier AI developers choose and implement one of the three differential access approaches, including considerations based on a model's cyber capabilities, a defender's maturity and role, and strategic and technical implementation details. We also present four example schemes for defenders to reference, demonstrating how differential access provides value across various capability and defender levels, and suggest directions for further research.
Updated: 2025-05-31 00:20:22
标题: 设计不对称性:通过差异化AI访问提升网络防御者
摘要: 随着AI技术在网络安全领域的应用变得越来越先进,我们提出“差异化访问”作为一种策略,以通过塑造对这些能力的访问来倾斜网络安全平衡向防御方向。我们介绍了三种可能的方法,形成一个连续体系,对于更高风险的能力,这些方法逐渐变得更为严格:促进访问、管理访问和默认拒绝。然而,所有方法中的一个关键原则是需要优先考虑防御者的访问,即使在最严格的情况下,这样防御者可以为对手获得类似能力做准备。本报告提供了一个过程,帮助AI开发者选择并实施这三种差异化访问方法之一,包括基于模型的网络能力、防御者的成熟度和角色,以及战略和技术实施细节的考虑。我们还提供了四个防御者可以参考的示例方案,展示了差异化访问如何在不同的能力和防御者水平上提供价值,并建议进一步研究方向。
更新时间: 2025-05-31 00:20:22
领域: cs.CR,cs.CY,K.4.1
Local Frames: Exploiting Inherited Origins to Bypass Content Blockers
We present a study of how local frames (i.e., iframes with non-URL sources like "about:blank") are mishandled by a wide range of popular Web security and privacy tools. As a result, users of these tools remain vulnerable to the very attack techniques they seek to protect against, including browser fingerprinting, cookie-based tracking, and data exfiltration. The tools we study are vulnerable in different ways, but all share a root cause: legacy Web functionality interacting with browser privacy boundaries in unexpected ways, leading to systemic vulnerabilities in tools developed, maintained, and recommended by privacy experts and activists. We consider four core capabilities supported by most privacy tools and develop tests to determine whether each can be evaded through the use of local frames. We apply our tests to six popular Web privacy and security tools, identifying at least one vulnerability in each for a total of 19, and extract common patterns regarding their mishandling of local frames. Our measurement of popular websites finds that 56% employ local frames and that 73.7% of the requests made by these local frames should be blocked by popular filter lists but instead trigger the vulnerabilities we identify; from another perspective, 14.3% of all sites that we crawl make requests that should be blocked inside of local frames. We disclosed the vulnerabilities to the tool authors and discuss both our experiences working with them to patch their products and the implications of our findings for other privacy and security research.
Updated: 2025-05-31 00:07:24
标题: 本地框架:利用继承的起源来绕过内容阻挡器
摘要: 我们提出了一个研究,探讨了当本地框架(即包含非URL来源,如"about:blank")被广泛使用的网络安全和隐私工具误处理时会发生什么。结果,这些工具的用户仍然容易受到他们试图保护的攻击技术的影响,包括浏览器指纹识别、基于cookie的跟踪和数据外泄。我们研究的工具存在不同的漏洞,但都有一个共同的根本原因:传统的网络功能与浏览器隐私界限以意想不到的方式互动,导致由隐私专家和活动人士开发、维护和推荐的工具中存在系统性漏洞。 我们考虑了大多数隐私工具支持的四个核心功能,并开发了测试来确定每个功能是否可以通过使用本地框架来规避。我们将测试应用于六种流行的网络隐私和安全工具,发现每种工具中至少存在一个漏洞,总共达到19个,并提取了关于它们对本地框架的误处理的共同模式。我们测量了流行网站的数据,发现56%的网站使用本地框架,而这些本地框架发出的73.7%的请求应该被流行的过滤列表阻止,但实际上触发了我们识别的漏洞;换个角度看,我们爬取的所有网站中有14.3%发出的请求应该在本地框架内被阻止。我们向工具作者披露了这些漏洞,并讨论了我们与他们合作修补产品的经验,以及我们的研究结果对其他隐私和安全研究的影响。
更新时间: 2025-05-31 00:07:24
领域: cs.CR