At Netlight, AI tools speed up routine coding and prototyping but introduce errors, context gaps, and integration costs that increase the value of human supervision and orchestration. Firms that build effective orchestration and tooling layers, and that hire for system-design and coordination skills, are likely to capture disproportionate productivity gains.
As IT development is reshaped by artificial intelligence, IT professionals are beginning to create IT products in new ways. Drawing on our study of experienced workers at Netlight Consulting GmbH, a Swedish-founded, internationally operating IT consultancy, this article shows how IT professionals now use AI within their development activities. It also identifies the difficulties and limitations they face and explains how the looming rise of AI "orchestras" will transform how humans and AI build IT together.
Summary
Main Finding
Experienced IT professionals at Netlight are already integrating AI tools into everyday development work, changing how tasks are done rather than replacing developers outright. However, current AI capabilities create new frictions (quality, context, coordination, governance) that make human roles of supervision, orchestration, and integration more valuable. The authors argue we are moving toward “AI orchestras” — multilayered systems of specialized AI components coordinated by humans — which will reshape work, firm organization, and the economics of IT production.
Key Points
- AI is being used as a practical assistant in coding, debugging, prototyping, and knowledge retrieval rather than as a fully autonomous developer.
- Common uses include: generating code snippets, suggesting fixes, accelerating routine tasks, surfacing design patterns or documentation, and scaffolding prototypes.
- Major limitations and frictions reported by practitioners:
  - Errors and hallucinations: output can be incorrect, incomplete, or misleading.
  - Context gaps: AI lacks full project context, design rationale, and long-term constraints.
  - Integration cost: AI outputs often need human revision, testing, and integration into existing systems.
  - Quality, security, and IP concerns: generated code may introduce vulnerabilities or licensing ambiguity.
  - Tooling and workflow mismatch: current AI tools are not always well integrated into team processes or CI/CD pipelines.
- Human roles shift toward oversight, curation, specification, and orchestration of multiple AI components and tools.
- The “AI orchestra” concept: future development will involve coordinated ensembles of specialized AI agents (code generation, test generation, dependency analysis, security scanning, etc.) whose outputs must be orchestrated by humans and higher-level controllers.
- This orchestration increases demand for skills in system design, AI tooling, and coordination — not just coding.
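The "AI orchestra" pattern above can be sketched as a pipeline of specialized agents whose outputs flow to a human reviewer before anything is merged. The following is a minimal illustration with stubbed agents; all names and interfaces here are hypothetical and are not taken from the study:

```python
from dataclasses import dataclass, field
from typing import Callable

# Each "musician" is a specialized agent: it takes an artifact and
# returns the (possibly modified) artifact plus findings for review.
@dataclass
class Agent:
    name: str
    run: Callable[[str], tuple[str, list[str]]]

@dataclass
class Orchestra:
    agents: list[Agent]
    findings: list[str] = field(default_factory=list)

    def perform(self, artifact: str) -> str:
        # Agents run in sequence; a real system might parallelize
        # independent agents and iterate until findings are resolved.
        for agent in self.agents:
            artifact, notes = agent.run(artifact)
            self.findings.extend(f"{agent.name}: {n}" for n in notes)
        return artifact

# Stubs standing in for real code-gen / test-gen / security tools.
codegen = Agent("codegen", lambda spec: (
    f"def add(a, b):\n    return a + b  # from spec: {spec}", []))
testgen = Agent("testgen", lambda code: (
    code + "\n\nassert add(1, 2) == 3", ["added 1 test"]))
security = Agent("security", lambda code: (
    code, [] if "eval(" not in code else ["eval() detected"]))

orchestra = Orchestra([codegen, testgen, security])
result = orchestra.perform("add two numbers")

# The human conductor reviews the findings before merging.
print(orchestra.findings)  # ['testgen: added 1 test']
```

The point of the sketch is the division of labor: each agent stays narrow, while the orchestration layer and the human retain responsibility for sequencing, conflict resolution, and final acceptance.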
Data & Methods
- Basis: a field study of experienced IT professionals at Netlight Consulting GmbH (a Swedish-founded, internationally operating IT consultancy).
- The paper draws on first-hand empirical work with practitioners to document concrete AI uses, difficulties, and evolving practices.
- Methods (as reported): qualitative, practice-focused analysis of how AI is embedded in day-to-day IT development; evidence appears to come from interviews, observations, and analysis of workflows and tools used by experienced consultants.
- Limitations of the evidence:
- Single-firm focus (Netlight) may limit generalizability across industries, firm sizes, or geographic contexts.
- Fast-evolving AI toolchains mean findings could shift as models and integrations improve.
- The study emphasizes practitioner experience and may not quantify productivity or labor-market outcomes.
Implications for AI Economics
- Task reallocation and complementarities: AI substitutes for routine coding work but complements higher-order tasks (system architecture, integration, orchestration). Demand will rise for complementary skills (coordination, specification, AI-tooling expertise).
- Productivity measurement: Standard metrics may understate productivity gains or shifts because AI changes the mix of tasks and introduces new coordination costs; careful measurement of output quality and integration time is necessary.
- Wage and skill dynamics: Skilled developers who can orchestrate AI and manage complex workflows may see increased wage premiums; mid-level routine tasks face downward pressure or require upskilling.
- Organizational change and returns to scale: Firms that build effective orchestration layers and integrate AI across pipelines may capture outsized gains, increasing winner-take-all dynamics and concentration in IT services and platforms.
- Markets for complementary goods: Demand will grow for orchestration platforms, testing/verification tools, secure code-generation services, and team-level integrations—creating new market segments and monopolistic-leveraging opportunities for dominant tool providers.
- Employment and task scope: While AI may reduce time on coding chores, it may expand demand for roles that supervise AI ensembles, audit outputs (security, compliance), and maintain long-term system health.
- Policy and governance: Issues of liability, IP, security, and certification of AI-generated code become salient. Regulators and firms will need standards for provenance, testing, and accountability in AI-assisted development.
- Research needs: Quantitative studies to measure net productivity effects, longitudinal work on skill trajectories, and comparative analyses across firm types and industries to assess general equilibrium effects.
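The measurement point above can be made concrete with a toy calculation: comparing only the code-generation step overstates the gain, because review and integration overhead must be netted out. All numbers below are invented for illustration:

```python
# Hypothetical hours per feature, before and after AI assistance.
baseline_hours = 10.0          # all-human implementation

ai_generation_hours = 2.0      # AI drafts the code quickly
review_and_fix_hours = 3.0     # human revision of errors/hallucinations
integration_hours = 2.0        # wiring output into the existing system

# Naive metric: compare only the coding step.
naive_speedup = baseline_hours / ai_generation_hours

# Net metric: include the new coordination and integration costs.
total_ai_hours = ai_generation_hours + review_and_fix_hours + integration_hours
net_speedup = baseline_hours / total_ai_hours

print(f"naive speedup: {naive_speedup:.1f}x")  # 5.0x
print(f"net speedup:   {net_speedup:.1f}x")    # 1.4x
```

With these illustrative figures, a 5x speedup on the generation step shrinks to roughly 1.4x once coordination costs are counted, which is why the authors call for careful measurement of output quality and integration time.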
Assessment
Claims (19)
| Claim | Direction | Confidence | Outcome | Details |
|---|---|---|---|---|
| Experienced IT professionals at Netlight are already integrating AI tools into everyday development work. | positive | high (0.09) | Adoption Rate | extent of AI tool use in day-to-day development workflows |
| Practitioners use AI primarily as a practical assistant (coding, debugging, prototyping, knowledge retrieval) rather than as a fully autonomous developer. | positive | high (0.09) | Task Allocation | types of tasks assigned to AI (assistant vs autonomous development) |
| Common uses of AI among practitioners include generating code snippets, suggesting fixes, accelerating routine tasks, surfacing design patterns or documentation, and scaffolding prototypes. | positive | high (0.09) | Task Allocation | frequency and nature of AI-assisted activities (code generation, suggestions, prototyping) |
| AI outputs commonly contain errors and hallucinations: generated code can be incorrect, incomplete, or misleading. | negative | high (0.09) | Error Rate | accuracy and correctness of AI-generated outputs |
| AI systems lack full project context, design rationale, and long-term constraints, creating context gaps for development tasks. | negative | high (0.09) | Output Quality | degree of project/contextual awareness in AI-produced recommendations |
| Integration cost: AI-generated outputs often require human revision, testing, and manual integration into existing systems. | negative | high (0.09) | Task Completion Time | human time/effort required to adapt AI outputs for production |
| Generated code can introduce security vulnerabilities and licensing/IP ambiguity, raising quality, security, and IP concerns. | negative | medium (0.05) | Regulatory Compliance | presence of security vulnerabilities and IP/licensing risk in AI-generated code (reported concerns) |
| Current AI tooling often mismatches existing team workflows and CI/CD pipelines, reducing seamless adoption. | negative | medium (0.05) | Adoption Rate | compatibility of AI tools with team processes and CI/CD |
| Human roles are shifting toward oversight, curation, specification, and orchestration of multiple AI components and tools. | mixed | medium (0.05) | Task Allocation | changes in role responsibilities (oversight, curation, orchestration) among developers |
| Authors propose the 'AI orchestra' concept: future development will involve coordinated ensembles of specialized AI agents (code generation, test generation, dependency analysis, security scanning) orchestrated by humans and higher-level controllers. | speculative | low (0.03) | Innovation Output | anticipated architecture of AI tool ecosystems (multiple specialized agents coordinated) |
| The need to orchestrate AI ensembles increases demand for skills in system design, AI-tooling, and coordination rather than only coding. | positive | low (0.03) | Skill Acquisition | demand for complementary skills (system design, AI-tooling, coordination) |
| AI substitutes for routine coding tasks but complements higher-order tasks such as system architecture, integration, and orchestration. | mixed | medium (0.05) | Task Allocation | task substitution/complementarity between AI and human developers (routine vs higher-order tasks) |
| Standard productivity metrics may understate AI-related productivity changes because AI alters task mixes and adds coordination costs. | mixed | low (0.03) | Developer Productivity | adequacy of standard productivity metrics to capture AI-induced changes |
| Skilled developers who can orchestrate AI may see increased wage premiums, while mid-level routine tasks face downward pressure or need upskilling. | mixed | low (0.03) | Wages | wage and demand shifts across skill levels in software development |
| Firms that build effective orchestration layers and integrate AI across pipelines may capture outsized gains, increasing winner-take-all dynamics and concentration. | positive | speculative (0.01) | Market Structure | firm-level returns and market concentration from AI orchestration capabilities |
| Demand will increase for complementary goods: orchestration platforms, testing/verification tools, secure code-generation services, and team-level integrations. | positive | low (0.03) | Adoption Rate | market demand for AI-complementary tools and services |
| Employment will shift: while AI reduces time spent on coding chores, demand may expand for roles that supervise AI ensembles, audit outputs, and maintain long-term system health. | mixed | low (0.03) | Employment | employment composition and task allocation in software development |
| Policy and governance issues become salient: liability, IP, security, and certification of AI-generated code require new standards for provenance, testing, and accountability. | neutral | medium (0.05) | Governance And Regulation | need for regulatory standards and governance mechanisms for AI-assisted development |
| Further quantitative and comparative research is needed to measure net productivity effects, skill trajectories, and generalizability across firm types and industries. | null_result | high (0.09) | Research Productivity | gaps in current empirical evidence (lack of quantitative, longitudinal, cross-firm studies) |